This Week in Rust 627

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is grapheme-utils, a library of functions for working ergonomically with Unicode grapheme clusters.
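As a quick illustration of why grapheme clusters are worth handling explicitly, here is a minimal sketch using the well-known unicode-segmentation crate (grapheme-utils' own API is not shown here, so the crate choice and the example below are illustrative assumptions, not grapheme-utils code):

use unicode_segmentation::UnicodeSegmentation;

fn main() {
    // "é" written as 'e' plus a combining accent: two chars, one grapheme.
    let s = "e\u{0301}🦀";
    assert_eq!(s.chars().count(), 3); // char count over-counts user-perceived characters
    assert_eq!(s.graphemes(true).count(), 2); // grapheme clusters match what users see
    for g in s.graphemes(true) {
        println!("{g}"); // prints "é", then "🦀"
    }
}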

Thanks to rustkins for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • Rustikon 2026 | CFP closes 2025-11-24 | Warsaw, Poland | 2026-03-19 - 2026-03-20 | Event Website
  • TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20
  • RustWeek 2026 | CFP closes 2025-12-31 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

456 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Only a handful of performance-related changes landed this week. The largest one was changing the default name mangling scheme in nightly to the v0 version, which produces slightly larger symbol names, so it had a small negative effect on binary sizes and compilation time.

Triage done by @kobzol. Revision range: 6159a440..b64df9d1

Summary:

(instructions:u)             mean    range             count
Regressions ❌ (primary)      0.9%    [0.3%, 2.7%]      48
Regressions ❌ (secondary)    0.9%    [0.2%, 2.1%]      25
Improvements ✅ (primary)    -0.5%    [-6.8%, -0.1%]    33
Improvements ✅ (secondary)  -0.5%    [-1.4%, -0.1%]    53
All ❌✅ (primary)             0.4%    [-6.8%, 2.7%]     81

1 Regression, 2 Improvements, 5 Mixed; 1 of them in rollups. 28 artifact comparisons were made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust

No Items entered Final Comment Period this week for Compiler Team (MCPs only), Cargo, Rust RFCs, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-11-26 - 2025-12-24 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Also: a program written in Rust had a bug, and while it caused downtime, there was no security issue and nobody's data was compromised.

Josh Triplett on /r/rust

Thanks to Michael Voelkl for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla Blog: Celebrating the contributors that power Mozilla Support

Every day, Firefox users around the world turn to Mozilla Support (SUMO) with a question, a hiccup or just a little curiosity. It’s community-powered – contributors offer answers and support to make someone’s day a little easier.

We celebrated this global community last month with Ask-A-Fox, a weeklong virtual event that brought together longtime contributors, newcomers and Mozilla staffers. The idea was simple: connect across time zones, trade tips and yes, answer questions.

Contributor appreciation, AMAs and an emoji hunt

For one lively week, contributors across Firefox and Thunderbird rallied together. Reply rates soared, response times dropped, and the forums buzzed with renewed energy. But the real story was the sense of connection.

There were live Ask Me Anything sessions with Mozilla’s WebCompat, Web Performance, and Thunderbird teams. There was even a playful 🦊 + ⚡ emoji hunt through our Knowledge Base.

“That AMA was really interesting,” said longtime Firefox contributor Paul. “I learned a lot and I recommend those that could not watch it live catch the recording as I am sure it will be very useful in helping users in SUMO.”

Ask-A-Fox was a celebration of people: long-time contributors, brand-new faces and everyone in between. Here are just a few standout contributors:

  • Firefox Desktop (including Enterprise)
    Paul, Denyshon, Jonz4SUSE, @next, jscher2000
  • Firefox for Android
    Paul, TyDraniu, GerardoPcp04, Mad_Maks, sjohnn
  • Firefox for iOS
    Paul, Simon.c.lord, TyDraniu, Mad_Maks, Mozilla-assistent
  • Thunderbird (including Android)
    Davidsk, Sfhowes, Mozilla98, MattAuSupport, Christ1

Newcomers mozilla98, starretreat, sjohnn, Vexi, Mark, Mapenzi, cartdaniel437, hariiee1277, and thisisharsh7 also made a big impact.

New contributor Shirmaya John said, “I love helping people, and I’m passionate about computers, so assisting with bugs or other tech issues really makes my day. I’m excited to grow here!” 

Contributor Vincent won our Staff Award for the highest number of replies during the week.

“Ask a Fox highlights the incredible collaborative spirit of our community. A reminder of what we can achieve when we unite around a shared goal,” said Kiki Kelimutu, a senior community manager at Mozilla.

Firefox has been powered by community from the start

As Mozilla’s communities program manager, I’ve seen firsthand how genuine connection fuels everything we do. Members of our community aren’t just answering questions; they’re building relationships, learning together, and showing up for one another with authenticity and care.

Mozilla is built by people who believe the internet should be open and accessible to all, and our community is the heartbeat of that vision. What started back in 2007 (and found its online home in 2010 at support.mozilla.org) has grown into a global network of contributors helping millions of Firefox users find answers, share solutions and get back on their Firefox journey.

Every question answered not only helps a user, it helps us build a better Firefox. By surfacing real issues and feedback, our community shapes the course of our products and keeps the web stronger for everyone.

Join the next Ask-A-Fox

Ask-A-Fox is a celebration of what makes Mozilla unique: our people.

As someone who’s spent years building communities, I know that lasting engagement doesn’t come from numbers or dashboards. It comes from treating contributors as individuals — people who bring their own stories, skills, and care to the table.

When Mozillians come together to share knowledge, laughter or even a few emojis, the result is more than faster replies. It’s a connection.

Two more Ask-A-Fox events are already planned for next year, continuing the work of building communities that make the web more open and welcoming.

If you’ve ever wanted to make the web a little more human, come join us. Because every answer, every conversation, and every connection helps keep Firefox thriving.

[Illustration: a cheerful cartoon fox head with a speech bubble, surrounded by chat bubbles on a warm orange-to-yellow gradient background.]

Join us in shaping the web

Sign up here


The Rust Programming Language Blog: Interview with Jan David Nose

On the Content Team, we had our first whirlwind outing at RustConf 2025 in Seattle, Washington, USA. There we had a chance to speak with folks about interesting things happening in the Project and the wider community.

Jan David Nose, Infrastructure Team

In this interview, Xander Cesari sits down with Jan David Nose, then one of the full-time engineers on the Infrastructure Team, which maintains and develops the infrastructure upon which Rust is developed and deployed -- including CI/CD tooling and crates.io.

We released this video some weeks ago on an accelerated timeline, in light of the recent software supply chain attacks, but the interview was conducted prior to the news of compromised packages in other languages and ecosystems.

Check out the interview here or click below.


Transcript

Xander Cesari: Hey, this is Xander Cesari with the Rust Project Content Team, recording on the last hour of the last day of RustConf 2025 here in Seattle. So it's been a long and amazing two days. And I'm sitting down here with a team member from the Rust Project Infra Team, the unsung heroes of the Rust language. Want to introduce yourself and kind of how you got involved?

Jan David Nose: Yeah, sure. I'm JD. Jan David is the full name, but especially in international contexts, I just go with JD. I've been working for the Rust Foundation for the past three years as a full-time employee and I essentially hit the jackpot to work full-time on open source and I've been in the Infra Team of the Rust Project for the whole time. For the past two years I've led the team together with Jake. So the Infra Team is kind of a thing that lets Rust happen and there's a lot of different pieces.

Xander Cesari: Could you give me an overview of the responsibility of the Infra Team?

Jan David Nose: Sure. I think on a high level, we think about this in terms of, we serve two different groups of people. On one side, we have users of the language, and on the other side, we really try to provide good tooling for the maintainers of the language.

Jan David Nose: Starting with the maintainer side, this is really everything about how Rust is built. From the moment someone makes a contribution or opens a PR, we maintain the continuous integration that makes sure that the PR actually works. There's a lot of bots and tooling helping out behind the scenes to kind of maintain a good status quo, a sane state. Lots of small things like triage tools on GitHub to set labels and ping people and these kinds of things. And that's kind of managed by the Infra Team at large.

Jan David Nose: And then on the user side, we have a lot of, or the two most important things are making sure users can actually download Rust. We don't develop crates.io, but we support the infrastructure to actually ship crates to users. All the downloads go through content delivery networks that we provide. The same for Rust releases. So if I don't do my job well, which has happened, there might be a global outage of crates.io and no one can download stuff. But those are kind of the two different buckets of services that we run and operate.

Xander Cesari: Gotcha. So on the maintainer side, the Rust organization on GitHub is a large organization with a lot of activity, a lot of code. There's obviously a lot of large code bases being developed on GitHub, but there are not that many languages the size of Rust being developed on GitHub. Are there unique challenges to developing a language and the tooling that's required versus developing other software projects?

Jan David Nose: I can think of a few things that have less to do with the language specifically, but with some of the architecture decisions that were made very early on in the life cycle of Rust. So one of the things that actually caused a lot of headache for mostly GitHub, and then when they complained to us, for us as well, is that for a long, long time, the index for crates.io was a Git repo on GitHub. As Rust started to grow, the activity on the repo became so big that it actually caused some issues, I would say, in a friendly way on GitHub, just in terms of how much resources that single repository was consuming. That then kind of started this work on a web-based, HTTP-based index to shift that away. That's certainly one area where we've seen how Rust has struggled a little bit with the platform, but also the platform provider struggled with us.

Jan David Nose: I think for Rust itself, especially when we look at CI, we really want to make sure that Rust works well on all of the targets and all the platforms we support. That means we have an extremely wide CI pipeline where, for every Tier 1 target, we want to run all the tests, we want to build the release artifacts, we want to upload all of that to S3. We want to do as much as we reasonably can for Tier 2 targets and, to a lesser extent, maybe even test some stuff on Tier 3. That has turned into a gigantic build pipeline. Marco gave a talk today on what we've done with CI over the last year. One of the numbers that came out of doing the research for this talk is that we accumulate over three million build minutes per month, which is about six years of CPU time every month.

Jan David Nose: Especially when it comes to open source projects, I think we're one of the biggest consumers of GitHub Actions in that sense. Not the biggest in total; there are definitely bigger commercial projects. But that's a unique challenge for us to manage because we want to provide as good a service as we can to the community and make sure that what we ship is high quality. That comes at a huge cost in terms of scaling. As Rust gets more popular and we want to target more and more platforms, this is like a problem that just continues to grow.

Jan David Nose: We'll probably never remove a lot of targets, so there's an interesting challenge to think about. If it's already big now, how does this look in 5 years, 10 years, 15 years, and how can we make sure we can maintain the level of quality we want to ship? When you build and run for a target in the CI pipeline, some of those Tier 1 targets you can just ask a cloud service provider to give you a VM running on that piece of hardware, but some of them are probably not things that you can just run in the cloud.

Xander Cesari: Is there some HIL (Hardware-In-the-Loop) lab somewhere?

Jan David Nose: So you're touching on a conversation that's happening pretty much as we speak. So far, as part of our target tier policy, there is a clause that says it needs to be able to run in CI. That has meant being very selective about only promoting things to Tier 1 that we can actually run and test. For all of this, we had a prerequisite that it runs on GitHub Actions. So far we've used very little hardware that is not natively supported or provided by GitHub.

Jan David Nose: But this is exactly the point with Rust increasing in popularity. We just got requests to support IBM platforms and RISC-V, and those are not natively supported on GitHub. That has kicked off an internal conversation about how we even support this. How can we as a project enable companies that can provide us hardware to test on? What are the implications of that?

Jan David Nose: On one side, there are interesting constraints and considerations. For example, you don't want your PRs to randomly fail because someone else's hardware is not available. We're already so resource-constrained on how many PRs we can merge each day that adding noise to that process would really slow down contributions to Rust. On the other side, there are security implications. Especially if we talk about promoting something to Tier 1 and we want to build release artifacts on that hardware, we need to make sure that those are actually secure and no one sneaks a back door into the Rust compiler target for RISC-V.

Jan David Nose: So there are interesting challenges for us, especially in the world we live in where supply chain security is a massive concern. We need to figure out how we can both support the growth of Rust and the growth of the language, the community, and the ecosystem at large while also making sure that the things we ship are reliable, secure, and performant. That is becoming an increasingly relevant and interesting piece to work on. So far we've gotten away with the platforms that GitHub supports, but it's really cool to see that this is starting to change and people approach us and are willing to provide hardware, provide sponsorship, and help us test on their platforms. But essentially we don't have a good answer for this yet. We're still trying to figure out what this means, what we need to take into consideration, and what our requirements are to use external hardware.

Xander Cesari: Yeah, everyone is so excited that Rust will run everywhere, but there's a maintenance cost there that grows almost exponentially in scope.

Jan David Nose: It's really interesting as well because there's a tension there. I think with IBM, for example, approaching us, it's an interesting example. Who has IBM platforms at home? The number of users for that platform is really small globally, but IBM also invests heavily in Rust, tries to make this happen, and is willing to provide the hardware.

Jan David Nose: For us, that leads to a set of questions. Is there a line? Is there a certain requirement? Is there a certain amount of usage that a platform would need for us to promote it? Or do we say we want to promote as much as we can to Tier 1? This is a conversation we haven't really had to have yet. It's only now starting to creep in as Rust is adopted more widely and companies pour serious money and resources into it. That's exciting to see.

Jan David Nose: In this specific case, companies approach the Infra Team to figure out how we can add their platforms to CI as a first step towards Tier 1 support. But it's also a broader discussion we need to have with larger parts of the Rust Project. For Tier 1 promotions, for example, the Compiler Team needs to sign off, Infra needs to sign off. Many more people need to be involved in this discussion of how we can support the growing needs of the ecosystem at large.

Xander Cesari: I get the feeling that's going to be a theme throughout this interview.

Jan David Nose: 100%.

Xander Cesari: So one other tool that's part of this pipeline that I totally didn't know about for a long time, and I think a talk at a different conference clued me into it, is Crater. It's a tool that attempts to run all of the Rust code it can find on the internet. Can you talk about what that tool does and how it integrates into the release process?

Jan David Nose: Whenever someone creates a pull request on GitHub to add a new feature or bug fix to the Rust compiler, they can start what's called a Crater run, or an experiment. Crater is effectively a large fleet of machines that tries to pull in as many crates as it can. Ideally, we would love to test all crates, but for a variety of reasons that's not possible. Some crates simply don't build reliably, so we maintain lists to exclude those. From the top of my head, I think we currently test against roughly 60% of crates.

Jan David Nose: The experiment takes the code from your pull request, builds the Rust compiler with it, and then uses that compiler to build all of these crates. It reports back whether there are any regressions related to the change you proposed. That is a very important tool for us to maintain backwards compatibility with new versions and new features in Rust. It lets us ask: does the ecosystem still compile if we add this feature to the compiler, and where do we run into issues? Then, and this is more on the Compiler Team side, there's a decision about how to proceed. Is the breakage acceptable? Do we need to adjust the feature? Having Crater is what makes that conversation possible because it gives us real data on the impact on the wider ecosystem.

Xander Cesari: I think that's so interesting because as more and more companies adopt Rust, they're asking whether the language is going to be stable and backward compatible. You hear about other programming languages that had a big version change that caused a lot of drama and code changes. The fact that if you have code on crates.io, the Compiler Team is probably already testing against it for backwards compatibility is pretty reassuring.

Jan David Nose: Yeah, the chances are high, I would say. Especially looking at the whole Python 2 to Python 3 migration, I think as an industry we've learned a lot from those big version jumps. I can't really speak for the Compiler Team because I'm not a member and I wasn't involved in the decision-making, but I feel this is one of the reasons why backwards compatibility is such a big deal in Rust's design. We want to make it as painless as possible to stay current, stay up to date, and make sure we don't accidentally break the language or create painful migration points where the entire ecosystem has to move at once.

Xander Cesari: Do you know if there are other organizations pulling in something like Crater and running it on their own internal crate repositories, maybe some of the big tech companies or other compiler developers or even other languages? Or is this really bespoke for the Rust compiler team?

Jan David Nose: I don't know of anyone who runs Crater itself as a tool. Crater is built on a sandboxing framework that we also use in other places. For example, docs.rs uses some of the same underlying infrastructure to build all of the documentation. We try to share as much as we can of the functionality that exists in Crater, but I'm not aware of anyone using Crater in the same way we do.

Xander Cesari: Gotcha. The other big part of your job is that the Infra Team works on supporting maintainers, but it also supports users and consumers of Rust who are pulling from crates.io. It sounds like crates.io is not directly within your team, but you support a lot of the backend there.

Jan David Nose: Yeah, exactly. crates.io has its own team, and that team maintains the web application and the APIs. The crates themselves, all the individual files that people download, are hosted within our infrastructure. The Infra Team maintains the content delivery network that sits in front of that. Every download of a crate goes through infrastructure that we maintain. We collaborate very closely with the crates.io team on this shared interface. They own the app and the API, and we make sure that the files get delivered to the end user.

Xander Cesari: So it sounds like there's a lot of verification of the files that get uploaded and checks every time someone pushes a new version to crates.io. That part all happens within crates.io as an application.

Jan David Nose: Cargo uses the crates.io API to upload the crate file. crates.io has a lot of internal logic to verify that it is valid and that everything looks correct. For us, as the Infra Team, we treat that as a black box. crates.io does its work, and if it is happy with the upload, it stores the file in S3. From that point onward, infrastructure makes sure that the file is accessible and can be downloaded so people can start using your crate.

Xander Cesari: In this theme of Rust being a bit of a victim of its own success, I assume all of the traffic graphs and download graphs are very much up and to the right.

Jan David Nose: On the Foundation side, one of our colleagues likes to check how long it takes for one billion downloads to happen on crates.io, and that number has been falling quickly. I don't remember what it was three years ago, but it has come down by orders of magnitude. In our download traffic we definitely see exponential growth. Our traffic tends to double year over year, and that trend has been pretty stable. It really seems like Rust is getting a lot of adoption in the ecosystem and people are using it for more and more things.

Xander Cesari: How has the Infra Team scaled with that? Are you staying ahead of it, or are there a lot of late nights?

Jan David Nose: There have definitely been late nights. In the three years I've been working in the Infra Team, every year has had a different theme that was essentially a fire to put out.

Jan David Nose: It changes because we fix one thing and then the next thing breaks. So far, luckily, those fires have been mostly sequential, not parallel. When I joined, bandwidth was the big topic. Over the last year, it has been more about CI. About three years ago, we hit this inflection point where traffic was doubling and the sponsorship capacity we had at the time was reaching its limits.

Jan David Nose: Two or three years ago, Fastly welcomed us into their Fast Forward program and has been sponsoring all of our bandwidth since then. That has mostly helped me sleep at night. It has been a very good relationship. They have been an amazing partner and have helped us at every step to remove the fear that we might hit limits. They are very active in the open source community at large; most famously they also sponsor PyPI and the Python ecosystem, compared to which we're a tiny fish in a very big pond. That gives us a lot of confidence that we can sustain this growth and keep providing crates and releases at the level of quality people expect.

Xander Cesari: In some ways, Rust did such a good job of making all of that infrastructure feel invisible. You just type Cargo commands into your terminal and it feels magical.

Jan David Nose: I'm really happy about that. It's an interesting aspect of running an infrastructure team in open source. If you look at the ten-year history since the first stable release, or even the fifteen years since Rust really started, infrastructure was volunteer-run for most of that time. I've been here for three years, and I was the first full-time infrastructure engineer. So for ten to twelve years, volunteers ran the infrastructure.

Jan David Nose: For them, it was crucial that things just worked, because you can't page volunteers in the middle of the night because a server caught fire or downloads stopped working. From the beginning, our infrastructure has been designed to be as simple and as reliable as possible. The same is true for our CDNs. I always feel a bit bad because Fastly is an amazing sponsor. Every time we meet them at conferences or they announce new features, they ask whether we want to use them or talk about how we use Fastly in production. And every time I have to say: we have the simplest configuration possible. We set some HTTP headers. That's pretty much it.

Jan David Nose: It's a very cool platform, but we use the smallest set of features because we need to maintain all of this with a very small team that is mostly volunteer-based. Our priority has always been to keep things simple and reliable and not chase every fancy new technology, so that the project stays sustainable.

Xander Cesari: Volunteer-based organizations seem to have to care about work-life balance, which is probably terrific, and there are lessons to be learned there.

Jan David Nose: Yeah, it's definitely a very interesting environment to work in. It has different rules than corporations or commercial teams. We have to think about how much work we can do in a given timeframe in a very different way, because it's unpredictable when volunteers have time, when they're around, and what is happening in their lives.

Jan David Nose: Over the last few years, we've tried to reduce the number of fires that can break out. And when they do happen, we try to shield volunteers from them and take that work on as full-time employees. That started with me three years ago. Last year Marco joined, which increased the capacity we have, because there is so much to do on the Infra side that even with me working full-time, we simply did not have enough people.

Xander Cesari: So you're two full-time and everything else is volunteer.

Jan David Nose: Exactly. The team is around eight people. Marco and I work full-time and are paid by the Rust Foundation to focus exclusively on infrastructure. Then we have a handful of volunteers who work on different things.

Jan David Nose: Because our field of responsibility is so wide, the Infra Team works more in silos than other teams might. We have people who care deeply about very specific parts of the infrastructure. Otherwise there is simply too much to know for any one person. It has been a really nice mix, and it's amazing to work with the people on the team.

Jan David Nose: As someone who is privileged enough to work full-time on this and has the time and resources, we try to bear the bigger burden and create a space that is fun for volunteers to join. We want them to work on exciting things where there is less risk of something catching fire, where it's easier to come in, do a piece of work, and then step away. If your personal life takes over for two weeks, that's okay, because someone is there to make sure the servers and the lights stay on.

Jan David Nose: A lot of that work lives more on the maintainer side: the GitHub apps, the bots that help with triage. It's less risky if something goes wrong there. On the user side, if you push the wrong DNS setting, as someone might have done, you can end up in a situation where for 30 minutes no one can download crates. And in this case, "no one" literally means no user worldwide. That's not an experience I want volunteers to have. It's extremely stressful and was ultimately one of the reasons I joined in the first place—there was a real feeling of burnout from carrying that responsibility.

Jan David Nose: It's easier to carry that as a full-timer. We have more time and more ways to manage the stress. I'm honestly extremely amazed by what the Infra Team was able to do as volunteers. It's unbelievable what they built and how far they pushed Rust to get to where we are now.

Xander Cesari: I think anyone who's managing web traffic in 2025 is talking about traffic skyrocketing due to bots and scrapers for AI or other purposes. Has that hit the Rust network as well?

Jan David Nose: Yeah, we've definitely seen that. It's handled by a slightly different team, but on the docs.rs side in particular we've seen crawlers hit us hard from time to time, and that has caused noticeable service degradation. We're painfully aware of the increase in traffic that comes in short but very intense bursts when crawlers go wild.

Jan David Nose: That introduces a new challenge for our infrastructure. We need to figure out how to react to that traffic and protect our services from becoming unavailable to real users who want to use docs.rs to look up something for their work. On the CDN side, our providers can usually handle the traffic. It is more often the application side where things hurt.

Jan David Nose: On the CDN side we also see people crawling crates.io, presumably to vacuum up the entire crates ecosystem into an LLM. Fortunately, over the last two years we've done a lot of work to make sure crates.io as an application is less affected by these traffic spikes. Downloads now bypass crates.io entirely and go straight to the CDN, so the API is not hit by these bursts. In the past, this would have looked like a DDoS attack, with so many requests from so many sources that we couldn't handle it.

Jan David Nose: We've done a lot of backend work to keep our stack reliable, but it's definitely something that has changed the game over the last year. We can clearly see that crawlers are much more active than before.

Xander Cesari: That makes sense. I'm sure Fastly is working on this as well. Their business has to adapt to be robust to this new internet.

Jan David Nose: Exactly. For example, one of the conversations we're having right now is about docs.rs. It's still hosted on AWS behind CloudFront, but we're talking about putting it behind Fastly because through Fastly we get features like bot protection that can help keep crawlers out.

Jan David Nose: This is a good example of how our conversations have changed in the last six months. At the start of the year I did not think this would be a topic we would be discussing. We were focused on other things. For docs.rs we have long-term plans to rebuild the infrastructure that powers it, and I expected us to spend our energy there. But with the changes in the industry and everyone trying to accumulate as much data as possible, our priorities have shifted. The problems we face and the order in which we tackle them have changed.

Xander Cesari: And I assume as one of the few paid members of a mostly volunteer team, you often end up working on the fires, not the interesting next feature that might be more fun.

Jan David Nose: That is true, although it sounds a bit negative to say I only get to work on fires. Sometimes it feels like that because, as with any technology stack, there is a lot of maintenance overhead. We definitely pay that price on the infrastructure side.

Jan David Nose: Marco, for example, spent time this year going through all the servers we run, cataloging them, and making sure they're patched and on the latest operating system version. We updated our Ubuntu machines to the latest LTS. It feels a bit like busy work—you just have to do it because it's important and necessary, but it's not the most exciting project.

Jan David Nose: On the other hand, when it comes to things like CDN configuration and figuring out how bot protection features work and whether they are relevant to us, that is also genuinely interesting work. It lets us play with new tools vendors provide, and we're working on challenges that the wider industry is facing. How do you deal with this new kind of traffic? What are the implications of banning bots? How high is the risk of blocking real users? Sometimes someone just misconfigures a curl script, and from the outside it looks like they're crawling our site.

Jan David Nose: So it's an interesting field to work in, figuring out how we can use new features and address new challenges. That keeps it exciting even for us full-timers who do more of the "boring" work. We get to adapt alongside how the world around us is changing. If there's one constant, it's change.

Xander Cesari: Another ripped-from-the-headlines change around this topic is software supply chain security, and specifically xz-utils and the conversation around open source security. How much has that changed the landscape you work in?

Jan David Nose: The xz-utils compromise was scary. I don't want to call it a wake-up call, because we've been aware that supply chain security is a big issue and this was not the first compromise. But the way it happened felt very unsettling. You saw an actor spend a year and a half building social trust in an open source project and then using that to introduce a backdoor.

Jan David Nose: Thinking about that in the context of Rust: every team in the project talks about how we need more maintainers, how there's too much workload on the people who are currently contributing, and how Rust's growth puts strain on the organization as a whole. We want to be an open and welcoming project, and right now we also need to bring new people in. If someone shows up and says, "I'm willing to help, please onboard me," and they stick around for a year and then do something malicious, we would be susceptible to that. I don't think this is unique to Rust. This is an inherent problem in open source.

Xander Cesari: Yeah, it's antithetical to the culture.

Jan David Nose: Exactly. So we're trying to think through how we, as a project and as an ecosystem, deal with persistent threat actors who have the time and resources to play a long game. Paying someone to work full-time on open source for a year is a very different threat model than what we used to worry about.

Jan David Nose: I used to joke that the biggest threat to crates.io was me accidentally pulling the plug on a CDN. I think that has changed. Today the bigger threat is someone managing to insert malicious code into our releases, our supply chain, or crates.io itself. They could find ways to interfere with our systems in ways we're simply not prepared for, where, as a largely volunteer organization, we might be too slow to react to a new kind of attack.

Jan David Nose: Looking back over the last three years, this shift became very noticeable, especially after the first year. Traffic was doubling, Rust usage was going up a lot, and there were news stories about Rust being used in the Windows kernel, in Android, and in parts of iOS. Suddenly Rust is everywhere. If you want to attack "everywhere," going after Rust becomes attractive. That definitely puts a target on our back and has changed the game.

Jan David Nose: I'm very glad the Rust Foundation has a dedicated security engineer who has done a lot of threat modeling and worked with us on infrastructure security. There's also a lot of work happening specifically around the crates ecosystem and preventing supply chain attacks through crates. Luckily, it's not something the Infra side has to solve alone. But it is getting a lot more attention, and I think it will be one of the big challenges for the future: how a mostly volunteer-run project keeps up with this looming threat.

Xander Cesari: And it is the industry at large. This is not a unique problem to the Rust package manager. All package registries, from Python to JavaScript to Nix, deal with this. Is there an industry-wide conversation about how to help each other out and share learnings?

Jan David Nose: Yeah, there's definitely a lot happening. I have to smile a bit because, with a lot of empathy but also a bit of relief, we sometimes share news when another package ecosystem gets compromised. It is a reminder that it's not just us; sometimes it's npm's turn.

Jan David Nose: We really try to stay aware of what's happening in the industry and in other ecosystems: what new threats or attack vectors are emerging, what others are struggling with. Sometimes that is security; sometimes it's usability. A year and a half ago, for example, npm had the "everything" package where someone declared every package on npm as a dependency, which blew up the index. We look at incidents like that and ask whether crates.io would struggle with something similar and whether we need to make changes.

Jan David Nose: On the security side we also follow closely what others are doing. In the packaging community, the different package managers are starting to come together more often to figure out which problems everyone shares. There is a bit of a joke that we're all just shipping files over the internet. Whether it's an npm package or a crate, ultimately it's a bunch of text files in a zip. So from an infrastructure perspective the problems are very similar.

Jan David Nose: These communities are now talking more about what problems PyPI has, what problems crates.io has, what is happening in the npm space. One thing every ecosystem has seen—even the very established ones—is a big increase in bandwidth needs, largely connected to the emergence of AI. PyPI, for example, publishes download charts, and it's striking. Python had steady growth—slightly exponential, but manageable—for many years. Then a year or two ago you see a massive hockey stick. People discovered that PyPI was a great distribution system for their models. There were no file size limits at the time, so you could publish precompiled GPU models there.

Jan David Nose: That pattern shows up everywhere. It has kicked off a new era for packaging ecosystems to come together and ask: in a time where open source is underfunded and traffic needs keep growing, how can we act together to find solutions to these shared problems? crates.io is part of those conversations. It's interesting to see how we, as an industry, share very similar problems across ecosystems—Python, npm, Rust, and others.

Xander Cesari: With a smaller, more hobbyist-focused community, you can have relaxed rules about what goes into your package manager. Everyone knows the spirit of what you're trying to do and you can get away without a lot of hard rules and consequences. Is the Rust world going to have to think about much harder rules around package sizes, allowed files, and how you're allowed to distribute things?

Jan David Nose: Funnily enough, we're coming at this from the opposite direction. Compared to other ecosystems, we've always had fairly strict limits. A crate can be at most around ten megabytes in size. There are limits on what kinds of files you can put in there. Ironically, those limits have helped us keep traffic manageable in this period.

Jan David Nose: At the same time, there is a valid argument that these limits may not serve all Rust use cases. There are situations where you might want to include something precompiled in your crate because it is hard to compile locally, takes a very long time, or depends on obscure headers no one has. I don't think we've reached the final state of what the crates.io package format should look like.

Jan David Nose: That has interesting security implications. When we talk about precompiled binaries or payloads, we all have that little voice in our head every time we see a curl | sh command: can I trust this? The same is true if you download a crate that contains a precompiled blob you cannot easily inspect.

Jan David Nose: The Rust Foundation is doing a lot of work and research here. My colleague Adam, who works on the crates.io team, is working behind the scenes to answer some of these questions. For example: what kind of security testing can we do before we publish crates to make sure they are secure and don't contain malicious payloads? How do we surface this information? How do we tell a publisher that they included files that are not allowed? And from the user's perspective, when you visit crates.io, how can you judge how well maintained and how secure a crate is?

Jan David Nose: Those conversations are happening quite broadly in the ecosystem. On the Infra side we're far down the chain. Ultimately we integrate with whatever security scanning infrastructure crates.io builds. We don't have to do the security research ourselves, but we do have to support it.

Jan David Nose: There's still a lot that needs to happen. As awesome as Rust already is, and as much as I love using it, it's important to remember that we're still a very young ecosystem. Python is now very mature and stable, but it's more than 25 years old. Rust is about ten years old as a stable language. We still have a lot to learn and figure out.

Xander Cesari: Is the Rust ecosystem running into problems earlier than other languages because we're succeeding at being foundational software and Rust is used in places that are even more security-critical than other languages, so you have to hit these hard problems earlier than the Python world did?

Jan David Nose: I think that's true. Other ecosystems probably had more time to mature and answer these questions. We're operating on a more condensed timeline. There is also simply more happening now. Open source has been very successful; it's everywhere. That means there are more places where security is critical.

Jan David Nose: So this comes with the success of open source, with what is happening in the ecosystem at large, and with the industry we're in. It does mean we have less time to figure some things out. On the flip side, we also have less baggage. We have less technical debt and fifteen fewer years of accumulated history. That lets us be on the forefront in some areas, like how a package ecosystem can stay secure and what infrastructure a 21st century open source project needs.

Jan David Nose: Here I really want to call out the Rust Foundation. They actively support this work: hiring people like Marco and me to work full-time on infrastructure, having Walter and Adam focus heavily on security, and as an organization taking supply chain considerations very seriously. The Foundation also works with other ecosystems so we can learn and grow together and build a better industry.

Jan David Nose: Behind the scenes, colleagues constantly work to open doors for us as a relatively young language, so we can be part of those conversations and sit at the table with other ecosystems. That lets us learn from what others have already gone through and also help shape where things are going. Sustainability is a big part of that: how do we fund the project long term? How do we make sure we have the human resources and financial resources to run the infrastructure and support maintainers? I definitely underestimated how much of my job would be relationship management and budget planning, making sure credits last until new ones arrive.

Xander Cesari: Most open core business models give away the thing that doesn't cost much—the software—and charge for the thing that scales with use—the service. In Rust's case, it's all free, which is excellent for adoption, but it must require a very creative perspective on the business side.

Jan David Nose: Yeah, and that's where different forces pull in opposite directions. As an open source project, we want everyone to be able to use Rust for free. We want great user experience. When we talk about downloads, there are ways for us to make them much cheaper, but that might mean hosting everything in a single geographic location. Then everyone, including people in Australia, would have to download from, say, Europe, and their experience would get much worse.

Jan David Nose: Instead, we want to use services that are more expensive but provide a better experience for Rust users. There's a real tension there. On one side we want to do the best we can; on the other side we need to be realistic that this costs money.

Xander Cesari: I had been thinking of infrastructure as a binary: it either works or it doesn't. But you're right, it's a slider. You can pick how much money you want to spend and what quality of service you get. Are there new technologies coming, either for the Rust Infra Team or the packaging world in general, to help with these security problems? New sandboxing technologies or higher-level support?

Jan David Nose: A lot of people are working on this problem from different angles. Internally we've talked a lot about it, especially in the context of Crater. Crater pulls in all of those crates to build them and get feedback from the Rust compiler. That means if someone publishes malicious code, we will download it and build it.

Jan David Nose: In Rust this is a particular challenge because build scripts can essentially do anything on your machine. For us that means we need strong sandboxing. We've built our own sandboxing framework so every crate build runs in an isolated container, which prevents malicious code from escaping and messing with the host systems.

Jan David Nose: We feel that pain in Crater, but if we can solve it in a way that isn't exclusive to Crater—if it also protects user machines from the same vulnerabilities—that would be ideal. People like Walter on the Foundation side are actively working on that. I'm sure there are conversations in the Cargo and crates teams as well, because every team that deals with packages sees a different angle of the problem. We all have to come together to solve it, and there is a lot of interesting work happening in that area.

Xander Cesari: I hope help is coming.

Jan David Nose: I'm optimistic.

Xander Cesari: We have this exponential curve with traffic and everything else. It seems like at some point it has to taper off.

Jan David Nose: We'll see. Rust is a young language. I don't know when that growth will slow down. I think there's a good argument that it will continue for quite a while as adoption grows.

Jan David Nose: Being at a conference like RustConf, it's exciting to see how the mix of companies has changed over time. We had a talk from Rivian on how they use Rust in their cars. We've heard from other car manufacturers exploring it. Rust is getting into more and more applications that a few years ago would have been hard to imagine or where the language simply wasn't mature enough yet.

Jan David Nose: As that continues, I think we'll see new waves of growth that sustain the exponential curve we currently have, because we're moving into domains that are new for us. It's amazing to see who is talking about Rust and how they're using it, sometimes in areas like space that you wouldn't expect.

Jan David Nose: I'm very optimistic about Rust's future. With this increase in adoption, we'll see a lot of interesting lessons about how to use Rust and a lot of creative ideas from people building with it. With more corporate adoption, I also expect a new wave of investment into the ecosystem: companies paying people to work full-time on different parts of Rust, both in the ecosystem and in the core project. I'm very curious what the next ten years will look like, because I genuinely don't know.

Xander Cesari: The state of Rust right now does feel a bit like the dog that caught the car and now doesn't know what to do with it.

Jan David Nose: Yeah, I think that's a good analogy. Suddenly we're in a situation where we realize we haven't fully thought through every consequence of success. It's fascinating to see how the challenges change every year. We keep running into new growing pains where something that wasn't an issue a year ago suddenly becomes one because growth keeps going up.

Jan David Nose: We're constantly rebuilding parts of our infrastructure to keep up with that growth, and I don't see that stopping soon. As a user, that makes me very excited. With the language and the ecosystem growing at this pace, there are going to be very interesting things coming that I can't predict today.

Jan David Nose: For the project, it also means there are real challenges: financing the infrastructure we need, finding maintainers and contributors, and creating a healthy environment where people can work without burning out. There is a lot of work to be done, but it's an exciting place to be.

Xander Cesari: Well, thank you for all your work keeping those magic Cargo commands I can type into my terminal just working in the background. If there's any call to action from this interview, it's that if you're a company using Rust, maybe think about donating to keep the Infra Team working.

Jan David Nose: We always love new Rust Foundation members. Especially if you're a company, that's one of the best ways to support the work we do. Membership gives us a budget we can use either to fund people who work full-time on the project or to fill gaps in our infrastructure sponsorship where we don't get services for free and have to pay real money.

Jan David Nose: And if you're not a company, we're always looking for people to help out. The Infra Team has a lot of Rust-based bots and other areas where people can contribute relatively easily.

Xander Cesari: Small scoped bots that you can wrap your head around and help out with.

Jan David Nose: Exactly. It is a bit harder on the Infra side because we can't give people access to our cloud infrastructure. There are areas where it's simply not possible to contribute as a volunteer because you can't have access to the production systems. But there is still plenty of other work that can be done.

Jan David Nose: Like every other team in the project, we're a bit short-staffed. So when you're at conferences, come talk to me or Marco. We have work to do.

Xander Cesari: Well, thank you for doing the work that keeps Rust running.

Jan David Nose: I'm happy to.

Xander Cesari: Awesome. Thank you so much.

Firefox Nightly: Getting Better Every Day – These Weeks in Firefox: Issue 192

Highlights

  • Collapsed tab group hover preview is going live in Firefox 145!
    • A collapsed Firefox tab group is hovered, showing a dropdown listing three tabs in a group labeled “Firefox stuff!” The results include “Download Firefox for Desktop — from Mozilla,” “Firefox browser features — Firefox” (currently open), and “Firefox - Wikipedia.”
  • Nicolas Chevobbe added a feature that collapses unreferenced CSS variable declarations in the Rules view (#1719461)
    • The Firefox Developer Tools Style Rules view showing a list of CSS rules applied from multiple stylesheets, including activity-stream.css, tokens-brand.css, and tokens-shared.css. Each rule is shown with its selector, and links to the line numbers in their respective stylesheets. Some rules include expandable boxes with messages similar to “Show 45 unused custom CSS properties,” indicating detection of unused variables or properties.
  • Alexandre Poirot [:ochameau] added a setting to enable automatic pretty printing in the Debugger (#1994128)
    • The Firefox Developer Tools Debugger settings menu is expanded. The settings gear icon is selected, displaying options such as “Disable JavaScript,” “Inline Variable Preview,” “Wrap Lines,” “Source Maps,” “Hide Ignored Sources,” “Ignore Known Third-party Scripts,” “Show paused overlay,” and “Automatic pretty printing,” with several options checked, and the last one hovered. A tooltip at the bottom says, “All sources in the debugger will be automatically pretty printed.”
  • Improved performance on pages making heavy use of CSS variables
    • Time to select the body element, before vs. after:

      Site        Before (ms)   After (ms)   Change
      hh.ru       3000          400          −86.67%
      pinterest   640           140          −78.13%
      bulma       820           250          −69.51%
      youtube     250           100          −60%
  • Jared H added a “copy this profile” button to the app menu (bug 1992199)
    • The Firefox profile management menu with three options: “New profile” with a plus icon, “Copy this profile” with a duplicate icon (hovered), and “Manage profiles.”

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Khalid AlHaddad
  • Kyler Riggs [:kylr]

New contributors (🌟 = first patch)

  • Alex Stout
  • Khalid AlHaddad
  • Jim Gong
  • Mason Abbruzzese
  • PhuongNam
  • Thomas J Faughnan Jr
  • Mingyuan Zhao [:MagentaManifold]

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Fixed an issue that was preventing dynamic import from resolving moz-extension ES modules when called from content scripts attached to sandboxed sub frames – Bug 1988419
    • Thanks to Yoshi Cheng-Hao Huang from the SpiderMonkey Team for looking into and fixing this issue hitting dynamic import usage from content scripts
Addon Manager & about:addons
  • As a follow-up to the work to improve the extensions button panel’s empty states, starting from Nightly 146 Firefox Desktop will show a message bar notice in both the extensions button panel and about:addons to highlight to users when Firefox is running in Troubleshoot Mode (also known as Safe Mode), in which all add-ons are expected to be disabled, along with a “Learn more” link pointing to the SUMO page describing Troubleshoot Mode in more detail – Bug 1992983 / Bug 1994074 / Bug 1727828
    • Firefox Extensions panel showing a message stating, “All extensions have been disabled by Troubleshoot Mode.” Below the message is an illustration of a fox peeking through a cityscape made of puzzle pieces. A message beneath the image says, “You have extensions installed, but not enabled. Select ‘Manage extensions’ to manage them in settings.” A “Manage extensions” link is displayed at the bottom.

DevTools

WebDriver

Lint, Docs and Workflow

  • ESLint
    • We are working on rolling out automatically fixable JSDoc rules across the whole tree. The aim is to reduce the number of disabled rules in roll-outs, and to make it simpler to enable JSDoc rules in new areas.
      • jsdoc/no-bad-blocks has now been enabled.
        • JSDoc comments are required to start with two stars; this rule raises an issue if a comment looks like it should be a JSDoc comment (e.g. it contains an @ tag) but starts with only one star.
      • jsdoc/multiline-blocks has also been enabled.
        • This is used mainly for layout consistency of multi-line comments, so that the text of the comment neither starts on the first line nor ends on the last line. This also helps with automatically fixing other rules.
  • StyleLint

Migration Improvements

New Tab Page

Performance Tools (aka Firefox Profiler)

  • Marker tooltips now have a ‘filter’ button to quickly filter the marker chart to similar markers.

Profile Management

  • Profiles is rolling out to all non-Windows 10 users in Firefox 144; it is looking healthy so far
  • Niklas refactored the BackupService to support using it to copy profiles (bug 1992203)
  • Jared H added per-profile desktop shortcuts on Windows (bug 1958955), available via a toggle on the about:editprofile page
  • Dave fixed an intermittent test crash in debug builds (bug 1994849) caused by a race between deleting a directory and attempting to open a lock file. nsProfileLock::LockWithFcntl now returns a warning instead of an error in this case.

Search and Navigation

Storybook/Reusable Components/Acorn Design System

  • <moz-message-bar> now supports arbitrary content via slot="message" elements
    • Ideally this is still something short, like a message, as opposed to inputs, etc.
    • <moz-message-bar><span slot="message" data-l10n-id="my-message"><a data-l10n-name="link"></a></span></moz-message-bar>
    • Note: if you’re using Lit, @click listeners etc. set on Fluent elements (data-l10n-name) won’t work; you’ll need to attach them to the data-l10n-id element or another parent

Niko Matsakis: Move Expressions

This post explores another proposal in the space of ergonomic ref-counting that I am calling move expressions. To my mind, these are an alternative to explicit capture clauses, one that addresses many (but not all) of the goals from that design with improved ergonomics and readability.

TL;DR

The idea itself is simple: within a closure (or future), we add the option to write move($expr). This is a value expression (“rvalue”) that desugars into a temporary value that is moved into the closure. So

|| something(&move($expr))

is roughly equivalent to something like:

{ 
    let tmp = $expr;
    || something(&{tmp})
}

How it would look in practice

Let’s go back to one of our running examples, the “Cloudflare example”, which originated in this excellent blog post by the Dioxus folks. As a reminder, this is how the code looks today – note the let _some_value = ... lines for dealing with captures:

// task:  listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
    do_something_else_with(_some_a, _some_b, _some_c)
});

Under this proposal it would look something like this:

tokio::task::spawn(async {
    do_something_else_with(
        move(self.some_a.clone()),
        move(self.some_b.clone()),
        move(self.some_c.clone()),
    )
});

There are times when you would want multiple clones. For example, if you want to move something into a FnMut closure that will then give away a copy on each call, it might look like

data_source_iter
    .inspect(|item| {
        inspect_item(item, move(tx.clone()).clone())
        //                      ----------  -------
        //                           |         |
        //                   move a clone      |
        //                   into the closure  |
        //                                     |
        //                             clone the clone
        //                             on each iteration
    })
    .collect();

// some code that uses `tx` later...

Credit for this idea

This idea is not mine. It’s been floated a number of times. The first time I remember hearing it was at the RustConf Unconf, but I feel like it’s come up before that. Most recently it was proposed by Zachary Harrold on Zulip, who has also created a prototype called soupa. Zachary’s proposal, like earlier proposals I’ve heard, used the super keyword. Later on @simulacrum proposed using move, which to me is a major improvement, and that’s the version I ran with here.

This proposal makes closures more “continuous”

The reason that I love the move variant of this proposal is that it makes closures more “continuous” and exposes their underlying model a bit more clearly. With this design, I would start by explaining closures with move expressions and just teach move closures at the end, as a convenient default:

A Rust closure captures the places you use in the “minimal way that it can” – so || vec.len() will capture a shared reference to the vec, || vec.push(22) will capture a mutable reference, and || drop(vec) will take ownership of the vector.

You can use move expressions to control exactly what is captured: so || move(vec).push(22) will move the vector into the closure. A common pattern when you want to be fully explicit is to list all captures at the top of the closure, like so:

|| {
    let vec = move(input.vec); // take full ownership of vec
    let data = move(&cx.data); // take a reference to data
    let output_tx = move(output_tx); // take ownership of the output channel

    process(&vec, &mut output_tx, data)
}

As a shorthand, you can write move || at the top of the closure, which will change the default so that closures take ownership of every captured variable. You can still mix-and-match with move expressions to get more control. So the previous closure might be written more concisely like so:

move || {
    process(&input.vec, &mut output_tx, move(&cx.data))
    //       ---------       ---------       --------      
    //           |               |               |         
    //           |               |       closure still  
    //           |               |       captures a ref
    //           |               |       `&cx.data`        
    //           |               |                         
    //       because of the `move` keyword on the closure,
    //       these two are captured "by move"
    //       
}

This proposal makes move “fit in” for me

It’s a bit ironic that I like this, because it’s doubling down on part of Rust’s design that I was recently complaining about. In my earlier post on Explicit Capture Clauses I wrote that:

To be honest, I don’t like the choice of move because it’s so operational. I think if I could go back, I would try to refashion our closures around two concepts:

  • Attached closures (what we now call ||) would always be tied to the enclosing stack frame. They’d always have a lifetime even if they don’t capture anything.
  • Detached closures (what we now call move ||) would capture by-value, like move today.

I think this would help to build up the intuition of “use detach || if you are going to return the closure from the current stack frame and use || otherwise”.

move expressions are, I think, moving in the opposite direction. Rather than talking about attached and detached, they bring us to a more unified notion of closures, one where you don’t have “ref closures” and “move closures” – you just have closures that sometimes capture moves, and a “move” closure is just a shorthand for using move expressions everywhere. This is in fact how closures work in the compiler under the hood, and I think it’s quite elegant.

Why not suffix?

One question is whether a move expression should be a prefix or a postfix operator. So e.g.

|| something(&$expr.move)

instead of &move($expr).

My feeling is that it’s not a good fit for a postfix operator because it doesn’t just take the final value of the expression and do something with it; it actually impacts when the entire expression is evaluated. Consider this example:

|| process(foo(bar()).move)

When does bar() get called? If you think about it, it has to be closure creation time, but it’s not very “obvious”.
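
To make the timing concrete, here is a hedged sketch in today’s Rust of what that implies, mirroring the temporary-based desugaring from earlier in the post; the helper functions are invented for illustration:

fn bar() -> u32 { 42 }
fn foo(x: u32) -> u32 { x + 1 }
fn process(x: u32) -> u32 { x }

fn main() {
    let closure = {
        // `bar()` (and `foo`) run here, at closure *creation* time...
        let tmp = foo(bar());
        // ...and the closure itself only moves `tmp` in.
        move || process(tmp)
    };
    assert_eq!(closure(), 43);
}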

We reached a similar conclusion when we were considering .unsafe operators. I think there is a rule of thumb that things which delineate a “scope” of code ought to be prefix – though I suspect unsafe(expr) might actually be nice, and not just unsafe { expr }.

Edit: I added this section after-the-fact in response to questions.

Conclusion

I’m going to wrap up this post here. To be honest, what this design really has going for it, above anything else, is its simplicity and the way it generalizes Rust’s existing design. I love that. To me, it joins the set of “yep, we should clearly do that” pieces in this puzzle:

  • Add a Share trait (I’ve gone back to preferring the name share 😁)
  • Add move expressions

These both seem like solid steps forward. I am not yet persuaded that they get us all the way to the goal that I articulated in an earlier post:

“low-level enough for a Kernel, usable enough for a GUI”

but they are moving in the right direction.

The Servo Blog: Servo Sponsorship Tiers

The Servo project is happy to announce the following new sponsorship tiers to encourage more donations to the project:

  • Platinum: 10,000 USD/month
  • Gold: 5,000 USD/month
  • Silver: 1,000 USD/month
  • Bronze: 100 USD/month

Organizations and individual sponsors donating in these tiers will be acknowledged on the servo.org homepage with their logo or name. Please note that such donations should come with no obligations to the project, i.e. they should be “no strings attached” donations. All the information about these new tiers is available at the Sponsorship page on this website.

Please contact us at join@servo.org if you are interested in sponsoring the project through one of these tiers.

Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187.

Last, but not least, we’re excited to welcome our first bronze sponsor, LambdaTest, which recently started donating to the Servo project. Thank you very much!

Mozilla Localization (L10N): Localizer spotlight – Robb

About You

My profile in Pontoon is robbp, but I go by Robb. I’m based in Romania and have been contributing to Mozilla localization since 2018 — first between 2018 and 2020, and now again after a break. I work mainly on Firefox (desktop and mobile), Thunderbird, AMO, and SUMO. When I’m not volunteering for open-source projects, I work as a professional translator in Romanian, English, and Italian.

Getting Started

Q: How did you first get interested in localization? Do you remember how you got involved in Mozilla localization?

A: I’ve used Thunderbird for many years, and I never changed the welcome screen. I’d always see that invitation to contribute somehow.

Back in 2018, I was using freeware only — including Thunderbird — and I started feeling guilty that I wasn’t giving back. I tried donating, but online payments seemed shady back then, and I thought a small, one-time donation wouldn’t make a difference.

Around the same time, my mother kept asking questions like, “What is this trying to do on my phone? I think they’re asking me something, but it’s in English!” My generation learned English from TV, Cartoon Network, and software, but when the internet reached the older generation, I realized how big of a problem language barriers could be. I wasn’t even aware that there was such a big wave of localizing everything seen on the internet. I was used to having it all in English (operating system, browser, e-mail client, etc.).

After translating for my mom for a year, I thought, why not volunteer to localize, too? Mozilla products were the first choice — Thunderbird was “in my face” all day, all night, telling me to go and localize. I literally just clicked the button on Thunderbird’s welcome page — that’s where it all started.

I had also tried contributing to other open-source projects, but Mozilla’s Pontoon just felt more natural to me. The interface is very close to the CAT tools I am used to.

Your Localization Journey

Q: What do you do professionally? How does that experience influence your Mozilla work and motivate you to contribute to open-source localization?

A: I’ve been a professional translator since 2012. I work in English, Romanian, and Italian — so yes, I type all the time.

In Pontoon, I treat the work as any professional project. I check for quality, consistency, and tone — just like I would for a client.

I was never a writer. I love translating. That’s why I became a translator (professionally). And here… I actually got more feedback here than in my professional translation projects. I think that’s why I stayed for so long, that’s why I came back.

It is a change of scenery when I don’t localize professionally, a long way from the texts I usually deal with. This is where I unwind, where I translate for the joy of translation, where I find my translator freedom.

Q: At what moment did you realize that your work really mattered?

A: When my mom stopped asking me what buttons to click! Now she just uses her phone in Romanian. I can’t help but smile when I see that. It makes me think I’m a tiny little part of that confidence she has now.

Community & Collaboration

Q: Since your return, Romanian coverage has risen from below 70% to above 90%. You translate, review suggestions, and comment on other contributors’ work. What helps you stay consistent and motivated?

A: I set small goals — I like seeing the completion percentage climb. I celebrate every time I hit a milestone, even if it’s just with a cup of coffee.

I didn’t realize it was such a big deal until the localization team pointed it out. It’s hard to see the bigger picture when you work in isolation. But it’s the same motivation that got me started and brought me back — you just need to find what makes you hum.

Q: Do you conduct product testing after you localize the strings or do you test them by being an active user? 

A: I’m an active user of both Firefox and Thunderbird — I use them daily and quite intensely. I also have Firefox Nightly installed in Romanian, and I like to explore it to see what’s changed and where. But I’ll admit, I’m not as thorough as I should be! Our locale manager gives me a heads-up about things to check, which helps me stay on top of updates. I must admit that the testing part is done by the team manager. He actively monitors everything that goes on in Pontoon and checks how strings in Pontoon land in the products and reach end users.

Q: How do you collaborate with other contributors and support new ones?

A: I’m more of an independent worker, but in Pontoon, I wanted to use the work that was already done by the “veterans” and see how I could fit in. We had email conversations over terms, their collaboration, their contributions, personal likes and dislikes etc. I think they actually did me a favor with the email conversations, given I am not active on any channels or social media and email was my only way of talking to them.

This year I started leaving comments in Pontoon — it’s such an easy way to communicate directly on specific strings. Given I was limited to emails until now, I think comments will help me reach out to other members of the team and start collaborating with them, too.

I keep in touch with the Romanian managers by email or Telegram. One of them helps me with technical terms; he helped get the Firefox project to 100% before the deadline. He contacts me with information on how to use options in Pontoon that I didn’t know about, and with ideas on wording (after he tests and reviews strings). Collaboration doesn’t always mean meetings; sometimes it’s quiet cooperation over time.

Mentoring is a big word, but I’m willing for the willing. If someone reaches out, I’ll always try to help.

Q: Have you noticed improvements in Pontoon since 2020? How does it compare to professional tools you use, and what features do you wish it had?

A: It’s fast — and I love that.

There’s no clutter — and that’s a huge plus. Some of the “much-tooted” professional tools are overloaded with features and menus that slow you down instead of helping. Pontoon keeps things simple and focused.

I also appreciate being able to see translations in other languages. I often check the French and Italian versions, just to compare terms.

The comments section is another great feature — it makes collaboration quick and to the point, perfect for discussing terms or string-specific questions. Machine translation has also improved a lot across the board, and Pontoon is keeping pace.

As for things that could be better — I’d love to try the pre-translation feature, but I’ve noticed that some imported strings confirm the wrong suggestion out of several options. That’s when a good translation-memory cleanup becomes necessary. It would be helpful if experienced contributors could trim the TM, removing obsolete or outdated terms so new contributors won’t accidentally use them.

Pontoon sometimes lags when I move too quickly through strings — like when approving matches or applying term changes across projects. And, unlike professional CAT tools, it doesn’t automatically detect repeated strings or propagate translations for identical text. That’s a small but noticeable gap compared to professional tools.

Personal Reflections

Q: Professional translators often don’t engage in open-source projects because their work is paid elsewhere. What could attract more translators — especially women — to contribute?

A: It’s tricky. Translation is a profession, not a hobby, and people need to make a living.

But for me, working on open-source projects is something different — a way to learn new things, use different tools, and have a different mindset. Maybe if more translators saw it as a creative outlet instead of extra work, they’d give it a try.

Involvement in open source is a personal choice. First, one has to hear about it, understand it, and realize that the software they use for free is made by people — then decide they want to be part of that.

I don’t think it’s a women’s thing. Many come and many go. Maybe it’s just the thrill at the beginning. Some try, but maybe translation is not for them…

Q: What does contributing to Mozilla mean to you today?

A: It’s my way of giving back — and of helping people like my mom, who just want to understand new technology without fear or confusion. That thought makes me smile every time I open Firefox or Thunderbird.

Q: Any final words…

A: I look forward to more blogs featuring fellow contributors, and to learning from and being inspired by their personal stories.

The Mozilla Blog: Rewiring Mozilla: Doing for AI what we did for the web

AI isn’t just another tech trend — it’s at the heart of most apps, tools and technology we use today. It enables remarkable things: new ways to create and collaborate and communicate. But AI is also letting us down, filling the internet with slop, creating huge social and economic risks — and further concentrating power over how tech works in the hands of a few.

This leaves us with a choice: push the trajectory of AI in a direction that’s good for humanity — or just let the slop pour out and the monopolies grow. For Mozilla, the choice is clear. We choose humanity. 

Mozilla has always been focused on making the internet a better place. Which is why pushing AI in a different direction than it’s currently headed is the core focus of our strategy right now. As AI becomes a fundamental component of everything digital — everything people build on the internet — it’s imperative that we step in to shape where it goes. 

This post is the first in a series that will lay out Mozilla’s evolving strategy to do for AI what we did for the web.

What did we do for the web? 

Twenty-five years ago, Microsoft Internet Explorer had 95% browser market share — controlling how most people saw the internet, and who could build what and on what terms. Mozilla was born to change this. Firefox challenged Microsoft’s monopoly control of the web, and dropped Internet Explorer’s market share to 55% in just a few short years.

The result was a very different internet. For most people, the internet was different because Firefox made it faster and richer — and blocked the annoying pop-up ads that were pervasive at the time. It did even more for developers: Firefox was a rocketship for the growth of open standards and open source, decentralizing who controlled the technology used to build things on the internet. This ushered in the web 2.0 era.

How did Mozilla do this? By building a non-profit tech company around the values in the Mozilla Manifesto — values like privacy, openness and trust. And by gathering a global community of tens of thousands — a rebel alliance of sorts — to build an alternative to the big tech behemoth of the time.

What does success look like? 

This is what we intend to do again: grow an alliance of people, communities, and companies who envision — and want to build — a different future for AI.

What does ‘different’ look like? There are millions of good answers to this question. If your native tongue isn’t a major internet language like English or Chinese, it might be AI that has nuance in the language you speak. If you are a developer or a startup, it might be having open source AI building blocks that are affordable, flexible and let you truly own what you create. And if you are, well, anyone, it’s probably apps and services that become more useful and delightful as they add AI — and that are genuinely trustworthy and respectful of who we are as humans. The common threads: agency, diversity, choice. 

Our task is to create a future for AI that is built around these values. We’ve started to rewire Mozilla to take on this task — and developed a new strategy focused just as much on AI as it is on the web. At the heart of this strategy is a double bottom line framework — a way to measure our progress against both mission and money: 

| Double bottom line | In the world | In Mozilla |
|---|---|---|
| Mission | Empower people with tech that promotes agency and choice – make AI for and about people. | Build AI that puts humanity first. 100% of Mozilla orgs building AI that advances the Mozilla Manifesto. |
| Money | Decentralize the tech industry – and create a tech ecosystem where the ‘people part’ of AI can flourish. | Radically diversify our revenue. 20% yearly growth in non-search revenue. 3+ companies with $25M+ revenue. |
Mozilla has always had an implicit double bottom line. The strategy we developed this year makes this double bottom line explicit — and ties it back to making AI more open and trustworthy. Over the next three years, all of the organizations in Mozilla’s portfolio will design their strategies — and measure their success — against this double bottom line. 

What will we build? 

As we’ve rewired Mozilla, we’ve not only laid out a new strategy — we have also brought in new leaders and expanded our portfolio of responsible tech companies. This puts us on a strong footing. The next step is the most important one: building new things — real technology and products and services that start to carve a different path for AI.

While it is still early days, all of the organizations across Mozilla are well underway with this piece of the puzzle. Each is focused on at least one of three areas in our strategy:

| Open source AI — for developers | Public interest AI — by and for communities | Trusted AI experiences — for everyone |
|---|---|---|
| Focus: grow a decentralized open source AI ecosystem that matches the capabilities of Big AI — and that enables people everywhere to build with AI on their own terms. | Focus: work with communities everywhere to build technology that reflects their vision of how AI and tech should work, especially where the market won’t build it for them. | Focus: create trusted AI-driven products that give people new ways to interact with the web — with user choice and openness as guiding principles. |
| Early examples: Mozilla.ai’s Choice First Stack, a unified open-source stack that simplifies building and testing modern AI agents. Also, llamafile for local AI. | Early examples: the Mozilla Data Collective, home to Common Voice, which makes it possible to train and tune AI models in 300+ languages, accents and dialects. | Early examples: recent Firefox AI experiments, which will evolve into AI Window in early 2026 — offering an opt-in way to choose models and add AI features in a browser you trust. |

The classic versions of Firefox and Thunderbird are still at the heart of what Mozilla does. These remain our biggest areas of investment — and neither of these products will force you to use AI. At the same time, you will see much more from Mozilla on the AI front in coming years. And you will see us invest in other double bottom line companies trying to point AI in a better direction.

We need to do this — together

These are the stakes: if we can’t push AI in a better direction, the internet — a place where 6 billion of us now spend much of our lives — will get much, much worse. If we want to shape the future of the web and the internet, we also need to shape the future of AI.

For Mozilla, whether or not to tackle this challenge isn’t a question anymore. We need to do this. The question is: how? The high level strategy that I’ve laid out is our answer. It doesn’t prescribe all the details — but it does give us a direction to point ourselves and our resources. Of course, we know there is still a HUGE amount to figure out as we build things — and we know that we can’t do this alone.

Which means it’s incredibly important to figure out: who can we walk beside? Who are our allies? There is a growing community of people who believe the internet is alive and well — and who are dedicating themselves to bending the future of AI to keep it that way. They may not all use the same words or be building exactly the same thing, but a rebel alliance of sorts is gathering. Mozilla sees itself as part of this alliance. Our plan is to work with as many of you as possible. And to help the alliance grow — and win — just as we did in the web era.

You can read the full strategy document here. Next up in this series: Building A LAMP Stack for AI. Followed by: A Double Bottom Line for Tech and The Mozilla Manifesto in the Era of AI.

The post Rewiring Mozilla: Doing for AI what we did for the web appeared first on The Mozilla Blog.

Mozilla Thunderbird: Thunderbird Pro November 2025 Update

Welcome back to the latest update on our progress with Thunderbird Pro, a set of additional subscription services designed to enhance the email client you know while providing a powerful open-source alternative to many of the big tech offerings available today. These services include Appointment, an easy-to-use scheduling tool; Send, which offers end-to-end encrypted file sharing; and Thundermail, an email service from the Thunderbird team. If you’d like more information on the broader details of each service and the road to getting here, you can read our past series of updates here. Do you want to receive these and other updates and be the first to know when Thunderbird Pro is available? Be sure to sign up for the waitlist.

With that said, here’s how progress has shaped up on Thunderbird Pro since the last update.

Current Progress

Thundermail

It took a lot of work to get here, but Thundermail accounts are now in production testing. Internal testing with our own team members has begun, ensuring everything is in place for support and onboarding of the Early Bird wave of users. On the visual side, we’ve implemented improved designs for the new Thundermail dashboard, where users can view and edit their settings, including adding custom domains and aliases. 

The new Thunderbird Pro add-on now features support for Thundermail, which will allow future users who sign up through the add-on to automatically add their Thundermail account in Thunderbird. Work to boost infrastructure and security has also continued, and we’ve migrated our data hosting from the Americas to Germany and the EU where possible. We’ve also been improving our email delivery to reduce the chances of Thundermail messages landing in spam folders.

Appointment

The team has been busy with design work, getting Zoom and CalDAV better integrated, and addressing workflow, infrastructure, and bugs. Appointment received a major visual update in the past few months, which is being applied across all of Thunderbird Pro. While some of these updates have already been implemented, there’s still lots of remodelling happening and under discussion – all in preparation for the Early Bird beta release.

Send

One of the main focuses for Send has been migrating it from its own add-on to the new Thunderbird Pro add-on, which will make using it in Thunderbird desktop much smoother. Progress continues on improving file safety through better reporting and prevention of illegal uploads. Our security review is now complete, with an external assessor validating all issues scheduled for fixing; once finalized, this report will be shared publicly with our community. Finally, we’ve refined the Send user experience by optimizing mobile performance, improving upload and download speeds, enhancing the first-time user flow, and much more.

Bringing it all together

Our new Thunderbird Pro website is now live, marking a major milestone in bringing the project to life. The website offers more details about Thunderbird Pro and serves as the first step for users to sign up, sign in and access their accounts. 


Our initial subscription tier, the Early Bird Plan, priced at $9 per month, will include all three services: Thundermail, Send, and Appointment. Email hosting, file storage, and the security behind all of this come at a cost, and Thunderbird Pro will never be funded by selling user data, showing ads, or compromising its independence. This introductory rate directly supports Thunderbird Pro’s early development and growth, positioning it for long-term sustainability. We will also be actively listening to your feedback and reviewing the pricing and plans we offer. Once the rough edges are smoothed out and we’re ready to open the doors to everyone, we plan to introduce additional tiers to better meet the needs of all our users.

What’s next

Thunderbird Pro is now awaiting its initial closed test run which will include a core group of community contributors. This group will help conduct a broader test and identify critical issues before we gradually open Early Bird access to our waitlist subscribers in waves. While these services will still be considered under active development, with your help this early release will continue to test and refine them for all future users.

Be sure you sign up for our Early Bird waitlist at tb.pro and help us shape the future of Thunderbird Pro. See you soon!

The post Thunderbird Pro November 2025 Update appeared first on The Thunderbird Blog.

The Rust Programming Language Blog: Switching to Rust's own mangling scheme on nightly

TL;DR: Starting in nightly-2025-11-21, rustc will use its own "v0" mangling scheme by default on nightly versions, instead of the previous default, which reused C++'s mangling scheme.

Context

When Rust is compiled into object files and binaries, each item (functions, statics, etc.) must have a globally unique "symbol" identifying it.

In C, the symbol name of a function is just the name that the function was defined with, such as strcmp. This is straightforward and easy to understand, but requires that each item have a globally unique name that doesn't overlap with any symbols from libraries that it is linked against. If two items had the same symbol, then when the linker tried to resolve that symbol to an address in memory (of a function, say), it wouldn't know which item was the correct one.

Languages like Rust and C++ define "symbol mangling schemes", leveraging information from the type system to give each item a unique symbol name. Without this, it would be possible to produce clashing symbols in a variety of ways. For example, every instantiation of a generic or templated function (or an overload in C++), all of which have the same name in the surface language, would end up with clashing symbols; likewise, items with the same name in different modules, such as a::foo and b::foo, would clash.
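
To make this concrete, here is a minimal Rust sketch of both clash sources (the names are invented for illustration); without mangling, the two foos and the two instantiations of id could not all be given distinct symbols:

mod a {
    pub fn foo() {}
}

mod b {
    pub fn foo() {}
}

// Each instantiation of this generic function needs its own symbol.
fn id<T>(x: T) -> T {
    x
}

fn main() {
    // The same surface name `foo` in two different modules...
    a::foo();
    b::foo();
    // ...and two distinct monomorphizations of `id`.
    let _ = id(1u32);
    let _ = id("hello");
}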

Rust originally used a symbol mangling scheme based on the Itanium ABI's name mangling scheme used by C++ (sometimes). Over the years, it was extended in an inconsistent and ad-hoc way to support Rust features that the mangling scheme wasn't originally designed for. Rust's current legacy mangling scheme has a number of drawbacks:

  • Information about generic parameter instantiations is lost during mangling
  • It is internally inconsistent - some paths use an Itanium ABI-style encoding but some don't
  • Symbol names can contain . characters which aren't supported on all platforms
  • Symbol names include an opaque hash which depends on compiler internals and can't be easily replicated by other compilers or tools
  • There is no straightforward way to differentiate between Rust and C++ symbols

If you've ever tried to use Rust with a debugger or a profiler and found it hard to work with because you couldn't work out which functions were which, it's probably because information was being lost in the mangling scheme.

Rust's compiler team started working on our own mangling scheme back in 2018 with RFC 2603 (see the "v0 Symbol Format" chapter in the rustc book for our current documentation on the format). Our "v0" mangling scheme has multiple advantageous properties:

  • An unambiguous encoding for everything that can end up in a binary's symbol table
  • Information about generic parameters is encoded in a reversible way
  • Mangled symbols are decodable such that it should be possible to identify concrete instances of generic functions
  • It doesn't rely on compiler internals
  • Symbols are restricted to only A-Z, a-z, 0-9 and _, helping ensure compatibility with tools on varied platforms
  • It tries to stay efficient and avoid unnecessarily long names and computationally-expensive decoding

However, rustc is not the only tool that interacts with Rust symbol names: the aforementioned debuggers, profilers and other tools all need to be updated to understand Rust's v0 symbol mangling scheme so that Rust's users can continue to work with Rust binaries using all the tools they're used to without having to look at mangled symbols. Furthermore, all of those tools need to have new releases cut and then those releases need to be picked up by distros. This takes time!

Fortunately, the compiler team now believes that support for our v0 mangling scheme is sufficiently widespread that it can start to be used by default by rustc.

Benefits

Rust backtraces, as well as debuggers, profilers and other tools that operate on compiled Rust code, will be able to output much more useful and readable names. This will especially help with async code, closures and generic functions.

It's easy to see the new mangling scheme in action; consider the following example:

fn foo<T>() {
    panic!()
}

fn main() {
    foo::<Vec<(String, &[u8; 123])>>();
}

With the legacy mangling scheme, all of the useful information about the generic instantiation of foo is lost in the symbol f::foo..

thread 'main' panicked at f.rs:2:5:
explicit panic
stack backtrace:
  0: std::panicking::begin_panic
    at /rustc/d6c...582/library/std/src/panicking.rs:769:5
  1: f::foo
  2: f::main
  3: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

..but with the v0 mangling scheme, the useful details of the generic instantiation are preserved with f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>:

thread 'main' panicked at f.rs:2:5:
explicit panic
stack backtrace:
  0: std::panicking::begin_panic
    at /rustc/d6c...582/library/std/src/panicking.rs:769:5
  1: f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>
  2: f::main
  3: <fn() as core::ops::function::FnOnce<()>>::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Possible drawbacks

Symbols using the v0 mangling scheme can be larger than symbols with the legacy mangling scheme, which can result in a slight increase in linking times and binary sizes if symbols aren't stripped (which they aren't by default). Fortunately this impact should be minor, especially with modern linkers like lld, which Rust will now default to on some targets.

Some old versions of tools/distros, or niche tools that the compiler team are unaware of, may not have had support for the v0 mangling scheme added. When using these tools, the only consequence is that users may encounter mangled symbols. rustfilt can be used to demangle Rust symbols if a tool does not do so itself.

In any case, using the new mangling scheme can be disabled if any problem occurs: use the -Csymbol-mangling-version=legacy -Zunstable-options flag to revert to using the legacy mangling scheme.

Explicitly enabling the legacy mangling scheme requires nightly; it is not intended to be stabilised, so that support for it can eventually be removed.

Adding v0 support in your tools

If you maintain a tool that interacts with Rust symbols and does not support the v0 mangling scheme, there are Rust and C implementations of a v0 symbol demangler available in the rust-lang/rustc-demangle repository that can be integrated into your project.
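
The Rust implementation is published as the rustc-demangle crate. As a minimal sketch of using it (the mangled name below is illustrative, not taken from a real binary):

use rustc_demangle::demangle;

fn main() {
    // An illustrative v0-mangled symbol; the `_R` prefix marks the v0
    // scheme, and `demangle` also understands legacy `_ZN...E` symbols.
    let mangled = "_RNvCs1234_7mycrate3foo";
    println!("{}", demangle(mangled));
}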

Summary

rustc will use our "v0" mangling scheme on nightly for all targets starting in tomorrow's rustup nightly (nightly-2025-11-21).

Let us know if you encounter problems, by opening an issue on GitHub.

If that happens, you can use the legacy mangling scheme with the -Csymbol-mangling-version=legacy -Zunstable-options flag, either by adding it to the usual RUSTFLAGS environment variable or to a project's .cargo/config.toml configuration file, like so:

[build]
rustflags = ["-Csymbol-mangling-version=legacy", "-Zunstable-options"]

If you like the sound of the new symbol mangling version and would like to start using it on stable or beta channels of Rust, then you can similarly use the -Csymbol-mangling-version=v0 flag today via RUSTFLAGS or .cargo/config.toml:

[build]
rustflags = ["-Csymbol-mangling-version=v0"]

Nick Fitzgerald: A Function Inliner for Wasmtime and Cranelift

Note: I cross-posted this to the Bytecode Alliance blog.

Function inlining is one of the most important compiler optimizations, not because of its direct effects, but because of the follow-up optimizations it unlocks. It may reveal, for example, that an otherwise-unknown function parameter value is bound to a constant argument, which makes a conditional branch unconditional, which in turn exposes that the function will always return the same value. Inlining is the catalyst of modern compiler optimization.

Wasmtime is a WebAssembly runtime that focuses on safety and fast Wasm execution. But despite that focus on speed, Wasmtime has historically chosen not to perform inlining in its optimizing compiler backend, Cranelift. There were two reasons for this surprising decision: first, Cranelift is a per-function compiler designed such that Wasmtime can compile all of a Wasm module’s functions in parallel. Inlining is inter-procedural and requires synchronization between function compilations; that synchronization reduces parallelism. Second, Wasm modules are generally produced by an optimizing toolchain, like LLVM, that already did all the beneficial inlining. Any calls remaining in the module will not benefit from inlining — perhaps they are on slow paths marked [[unlikely]] or the callee is annotated with #[inline(never)]. But WebAssembly’s component model changes this calculus.

With the component model, developers can compose multiple Wasm modules — each produced by different toolchains — into a single program. Those toolchains only had a local view of the call graph, limited to their own module, and they couldn’t see cross-module or fused adapter function definitions. None of them, therefore, had an opportunity to inline calls to such functions. Only the Wasm runtime’s compiler, which has the final, complete call graph and function definitions in hand, has that opportunity.

Therefore we implemented function inlining in Wasmtime and Cranelift. Its initial implementation landed in Wasmtime version 36; however, it remains off by default and is still baking. You can test it out via the -C inlining=y command-line flag or the wasmtime::Config::compiler_inlining method. The rest of this article describes function inlining in more detail, digs into the guts of our implementation and the rationale for its design choices, and finally looks at some early performance results.
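
From the embedding API, enabling it might look like the following minimal sketch; Config::compiler_inlining is the method named above, and the rest is ordinary Wasmtime setup:

use wasmtime::{Config, Engine};

fn main() -> wasmtime::Result<()> {
    // Opt in to the still-baking function inliner.
    let mut config = Config::new();
    config.compiler_inlining(true);

    // Modules and components compiled with this engine are now
    // eligible for inlining.
    let engine = Engine::new(&config)?;
    let _ = engine;
    Ok(())
}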

Function Inlining

Function inlining is a compiler optimization where a call to a function f is replaced by a copy of f’s body. This removes function call overheads (spilling caller-save registers, setting up the call frame, etc…) which can be beneficial on its own. But inlining’s main benefits are indirect: it enables subsequent optimization of f’s body in the context of the call site. That context is important — a parameter’s previously unknown value might be bound to a constant argument and exposing that to the optimizer might cascade into a large code clean up.

Consider the following example, where function g calls function f:

fn f(x: u32) -> bool {
    return x < u32::MAX / 2;
}

fn g() -> u32 {
    let a = 42;
    if f(a) {
        return a;
    } else {
        return 0;
    }
}

After inlining the call to f, function g looks something like this:

fn g() -> u32 {
    let a = 42;

    let x = a;
    let f_result = x < u32::MAX / 2;

    if f_result {
        return a;
    } else {
        return 0;
    }
}

Now the whole subexpression that defines f_result only depends on constant values, so the optimizer can replace that subexpression with its known value:

fn g() -> u32 {
    let a = 42;

    let f_result = true;
    if f_result {
        return a;
    } else {
        return 0;
    }
}

This reveals that the if-else conditional will, in fact, unconditionally transfer control to the consequent, and g can be simplified into the following:

fn g() -> u32 {
    let a = 42;
    return a;
}

In isolation, inlining f was a marginal transformation. When considered holistically, however, it unlocked a plethora of subsequent simplifications that ultimately led to g returning a constant value rather than computing anything at run-time.

Implementation

Cranelift’s unit of compilation is a single function, which Wasmtime leverages to compile each function in a Wasm module in parallel, speeding up compile times on multi-core systems. But inlining a function at a particular call site requires that function’s definition, which implies parallelism-hurting synchronization or some other compromise, like additional read-only copies of function bodies. So this was the first goal of our implementation: to preserve as much parallelism as possible.

Additionally, although Cranelift is primarily developed for Wasmtime by Wasmtime’s developers, it is independent from Wasmtime. It is a reusable library and is reused, for example, by the Rust project as an alternative backend for rustc. But a large part of inlining, in practice, is the heuristics for deciding when inlining a call is likely beneficial, and those heuristics can be domain-specific. Wasmtime generally wants to leave most calls out-of-line, inlining only cross-module calls, while rustc wants something much more aggressive to boil away its Iterator combinators and the like. So our second implementation goal was to separate how we inline a function call from the decision of whether to inline that call.

These goals led us to a layered design where Cranelift has an optional inlining pass, but the Cranelift embedder (e.g. Wasmtime) must provide a callback to it. The inlining pass invokes the callback for each call site, and the callback returns a command of either “leave the call as-is” or “here is a function body, replace the call with it”. Cranelift is responsible for the inlining transformation and the embedder is responsible for deciding whether to inline a function call and, if so, getting that function’s body (along with whatever synchronization that requires).

The mechanics of the inlining transformation — wiring arguments to parameters, renaming values, and copying instructions and basic blocks into the caller — are, well, mechanical. Cranelift makes extensive use of arenas for various entities in its IR, and we begin by appending the callee’s arenas to the caller’s arenas, renaming entity references from the callee’s arena indices to their new indices in the caller’s arenas as we do so. Next we copy the callee’s block layout into the caller and replace the original call instruction with a jump to the caller’s inlined version of the callee’s entry block. Cranelift uses block parameters, rather than phi nodes, so the call arguments simply become jump arguments. Finally, we translate each instruction from the callee into the caller. This is done via a pre-order traversal to ensure that we process value definitions before value uses, simplifying instruction operand rewriting. The changes to Wasmtime’s compilation orchestration are more interesting.
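
Before looking at that orchestration, here is a hedged, Cranelift-independent sketch of the append-and-remap step just described; the Arena type is a stand-in, not one of Cranelift’s actual entity arenas:

// Stand-in for one of Cranelift's entity arenas (values, blocks, etc.).
struct Arena<T> {
    items: Vec<T>,
}

impl<T> Arena<T> {
    /// Append all of `callee`'s entities to this (caller's) arena and
    /// return the offset that must be added to each callee-local index
    /// to turn it into a caller-local index.
    fn append(&mut self, callee: Arena<T>) -> usize {
        let offset = self.items.len();
        self.items.extend(callee.items);
        offset
    }
}

fn main() {
    let mut caller = Arena { items: vec!["v0", "v1"] };
    let callee = Arena { items: vec!["v0", "v1", "v2"] };

    let offset = caller.append(callee);
    // Callee entity index 2 is renamed to caller index 2 + offset.
    assert_eq!(offset, 2);
    assert_eq!(caller.items[2 + offset], "v2");
}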

The following pseudocode describes Wasmtime’s compilation orchestration before Cranelift gained an inlining pass and also when inlining is disabled:

// Compile each function in parallel.
let objects = parallel map for func in wasm.functions {
    compile(func)
};

// Combine the functions into one region of executable memory, resolving
// relocations by mapping function references to PC-relative offsets.
return link(objects)

The naive way to update that process to use Cranelift’s inlining pass might look something like this:

// Optionally perform some pre-inlining optimizations in parallel.
parallel for func in wasm.functions {
    pre_optimize(func);
}

// Do inlining sequentially.
for func in wasm.functions {
    func.inline(|f| if should_inline(f) {
        Some(wasm.functions[f])
    } else {
        None
    })
}

// And then proceed as before.
let objects = parallel map for func in wasm.functions {
    compile(func)
};
return link(objects)

Inlining is performed sequentially, rather than in parallel, which is a bummer. But if we tried to make that loop parallel by logically running each function’s inlining pass in its own thread, then a callee function we are inlining might or might not have had its transitive function calls inlined already depending on the whims of the scheduler. That leads to non-deterministic output, and our compilation must be deterministic, so it’s a non-starter.1 But whether a function has already had transitive inlining done or not leads to another problem.

With this naive approach, we are either limited to one layer of inlining or else potentially duplicating inlining effort, repeatedly inlining e into f each time we inline f into g, h, and i. This is because f may come before or after g in our wasm.functions list. We would prefer it if f already contained e and was already optimized accordingly, so that every caller of f didn’t have to redo that same work when inlining calls to f.

This suggests we should topologically sort our functions based on their call graph, so that we inline in a bottom-up manner, from leaf functions (those that do not call any others) towards root functions (those that are not called by any others, typically main and other top-level exported functions). Given a topological sort, we know that whenever we are inlining f into g either (a) f has already had its own inlining done or (b) f and g participate in a cycle. Case (a) is ideal: we aren’t repeating any work because it’s already been done. Case (b), when we find cycles, means that f and g are mutually recursive. We cannot fully inline recursive calls in general (just as you cannot fully unroll a loop in general) so we will simply avoid inlining these calls.2 So topological sort avoids repeating work, but our inlining phase is still sequential.

At the heart of our proposed topological sort is a call graph traversal that visits callees before callers. To parallelize inlining, you could imagine that, while traversing the call graph, we track how many still-uninlined callees each caller function has. Then we batch all functions whose associated counts are currently zero (i.e. they aren’t waiting on anything else to be inlined first) into a layer and process them in parallel. Next, we decrement each of their callers’ counts and collect the next layer of ready-to-go functions, continuing until all functions have been processed.

let call_graph = CallGraph::new(wasm.functions);

let counts = { f: call_graph.num_callees_of(f) for f in wasm.functions };

let layer = [ f for f in wasm.functions if counts[f] == 0 ];
while layer is not empty {
    parallel for func in layer {
        func.inline(...);
    }

    let next_layer = [];
    for func in layer {
        for caller in call_graph.callers_of(func) {
            counts[caller] -= 1;
            if counts[caller] == 0 {
                next_layer.push(caller)
            }
        }
    }
    layer = next_layer;
}

This algorithm will leverage available parallelism, and it avoids repeating work via the same dependency-based scheduling that topological sorting did, but it has a flaw. It will not terminate when it encounters recursion cycles in the call graph. If function f calls function g which also calls f, for example, then it will not schedule either of them into a layer because they are both waiting for the other to be processed first. One way we can avoid this problem is by avoiding cycles.

If you partition a graph’s nodes into disjoint sets, where each set contains every node reachable from every other node in that set, you get that graph’s strongly-connected components (SCCs). If a node does not participate in a cycle, then it will be in its own singleton SCC. The members of a cycle, on the other hand, will all be grouped into the same SCC, since those nodes are all reachable from each other.

In the following example (a figure in the original post), the dotted boxes designate the graph’s SCCs.

Ignoring edges between nodes within the same SCC, and only considering edges across SCCs, gives us the graph’s condensation. The condensation is always acyclic, because the original graph’s cycles are “hidden” within the SCCs.

The original post also includes a figure showing the condensation of the previous example.
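
To make these two ideas concrete, here is a hedged sketch using the petgraph crate, which is not what Wasmtime uses internally:

// Demonstrates SCCs, condensation, and the reversed condensation
// on a tiny call graph, using the `petgraph` crate.
use petgraph::algo::{condensation, tarjan_scc};
use petgraph::graph::DiGraph;

fn main() {
    // Call graph: f calls g, g calls f (a cycle), and g calls h.
    let mut call_graph = DiGraph::<&str, ()>::new();
    let f = call_graph.add_node("f");
    let g = call_graph.add_node("g");
    let h = call_graph.add_node("h");
    call_graph.extend_with_edges([(f, g), (g, f), (g, h)]);

    // `f` and `g` share an SCC because they are mutually recursive;
    // `h` is in a singleton SCC.
    let sccs = tarjan_scc(&call_graph);
    assert_eq!(sccs.len(), 2);

    // The condensation collapses each SCC to a single node; it is
    // acyclic, so a layered traversal of it will terminate.
    let mut condensed = condensation(call_graph, true);
    assert_eq!(condensed.node_count(), 2);

    // The "evaporation" described next is this condensation with every
    // edge flipped, so we can look up callers rather than callees.
    condensed.reverse();
}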

We can adapt our parallel-inlining algorithm to operate on strongly-connected components, and now it will correctly terminate because we’ve removed all cycles. First, we find the call graph’s SCCs and create the reverse (or transpose) condensation, where an edge a→b is flipped to b→a. We do this because we will query this graph to find the callers of a given function f, not the functions that f calls. I am not aware of an existing name for the reverse condensation, so, at Chris Fallin’s brilliant suggestion, I have decided to call it an evaporation. From there, the algorithm largely remains as it was before, although we keep track of counts and layers by SCC rather than by function.

let call_graph = CallGraph::new(wasm.functions);
let components = StronglyConnectedComponents::new(call_graph);
let evaporation = Evaporation::new(components);

let counts = { c: evaporation.num_callees_of(c) for c in components };

let layer = [ c for c in components if counts[c] == 0 ];
while layer is not empty {
    parallel for func in scc in layer {
        func.inline(...);
    }

    let next_layer = [];
    for scc in layer {
        for caller_scc in evaporation.callers_of(scc) {
            counts[caller_scc] -= 1;
            if counts[caller_scc] == 0 {
                next_layer.push(caller_scc);
            }
        }
    }
    layer = next_layer;
}

This is the algorithm we use in Wasmtime, modulo minor tweaks here and there to engineer some data structures and combine some loops. After parallel inlining, the rest of the compiler pipeline continues in parallel for each function, yielding unlinked machine code. Finally, we link all that together and resolve relocations, same as we did previously.

Heuristics are the only implementation detail left to discuss, but there isn’t much to say that hasn’t already been said. Wasmtime prefers not to inline calls within the same Wasm module, while cross-module calls are a strong hint that we should consider inlining. Beyond that, our heuristics are extremely naive at the moment, and only consider the code sizes of the caller and callee functions. There is a lot of room for improvement here, and we intend to make those improvements on-demand as people start playing with the inliner. For example, there are many things we don’t consider in our heuristics today, but possibly should (a sketch of the current, size-based check follows this list):

  • Hints from WebAssembly’s compilation-hints proposal
  • The number of edges to a callee function in the call graph
  • Whether any of a call’s arguments are constants
  • Whether the call is inside a loop or a block marked as “cold”
  • Etc…
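
As promised, here is a hedged sketch of that naive, size-based decision; the struct, the field names, and the threshold values are all invented for illustration and are not Wasmtime’s actual internals:

// Invented, illustrative summary of a call site's properties.
struct CallSite {
    // Does the call cross a Wasm module boundary?
    cross_module: bool,
    caller_size: usize,
    callee_size: usize,
}

// A naive heuristic in the spirit described above: prefer cross-module
// calls, and beyond that consider only caller/callee code sizes.
fn should_inline(site: &CallSite) -> bool {
    if !site.cross_module {
        // Same-module calls were already visible to the toolchain that
        // produced the module, so leave them out-of-line.
        return false;
    }
    // Made-up thresholds, purely for illustration.
    site.callee_size <= 64 && site.caller_size + site.callee_size <= 4096
}

fn main() {
    let site = CallSite { cross_module: true, caller_size: 512, callee_size: 40 };
    assert!(should_inline(&site));
}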

Some Initial Results

The speed up you get (or don’t get) from enabling inlining is going to vary from program to program. Here are a couple of synthetic benchmarks.

First, let’s investigate the simplest case possible, a cross-module call of an empty function in a loop:

(component
  ;; Define one module, exporting an empty function `f`.
  (core module $M
    (func (export "f")
      nop
    )
  )

  ;; Define another module, importing `f`, and exporting a function
  ;; that calls `f` in a loop.
  (core module $N
    (import "m" "f" (func $f))
    (func (export "g") (param $counter i32)
      (loop $loop
        ;; When counter is zero, return.
        (if (i32.eq (local.get $counter) (i32.const 0))
          (then (return)))
        ;; Do our cross-module call.
        (call $f)
        ;; Decrement the counter and continue to the next iteration
        ;; of the loop.
        (local.set $counter (i32.sub (local.get $counter)
                                     (i32.const 1)))
        (br $loop))
    )
  )

  ;; Instantiate and link our modules.
  (core instance $m (instantiate $M))
  (core instance $n (instantiate $N (with "m" (instance $m))))

  ;; Lift and export the looping function.
  (func (export "g") (param "n" u32)
    (canon lift (core func $n "g"))
  )
)

We can inspect the machine code that this compiles down to via the wasmtime compile and wasmtime objdump commands. Let’s focus only on the looping function. Without inlining, we see a loop around a call, as we would expect:

00000020 wasm[1]::function[1]:
        ;; Function prologue.
        20: pushq   %rbp
        21: movq    %rsp, %rbp

        ;; Check for stack overflow.
        24: movq    8(%rdi), %r10
        28: movq    0x10(%r10), %r10
        2c: addq    $0x30, %r10
        30: cmpq    %rsp, %r10
        33: ja      0x89

        ;; Allocate this function's stack frame, save callee-save
        ;; registers, and shuffle some registers.
        39: subq    $0x20, %rsp
        3d: movq    %rbx, (%rsp)
        41: movq    %r14, 8(%rsp)
        46: movq    %r15, 0x10(%rsp)
        4b: movq    0x40(%rdi), %rbx
        4f: movq    %rdi, %r15
        52: movq    %rdx, %r14

        ;; Begin loop.
        ;;
        ;; Test our counter for zero and break out if so.
        55: testl   %r14d, %r14d
        58: je      0x72
        ;; Do our cross-module call.
        5e: movq    %r15, %rsi
        61: movq    %rbx, %rdi
        64: callq   0
        ;; Decrement our counter.
        69: subl    $1, %r14d
        ;; Continue to the next iteration of the loop.
        6d: jmp     0x55

        ;; Function epilogue: restore callee-save registers and
        ;; deallocate this function's stack frame.
        72: movq    (%rsp), %rbx
        76: movq    8(%rsp), %r14
        7b: movq    0x10(%rsp), %r15
        80: addq    $0x20, %rsp
        84: movq    %rbp, %rsp
        87: popq    %rbp
        88: retq

        ;; Out-of-line traps.
        89: ud2
            ╰─╼ trap: StackOverflow

When we enable inlining, then M::f gets inlined into N::g. Despite N::g becoming a leaf function, we will still push %rbp and all that in the prologue and pop it in the epilogue, because Wasmtime always enables frame pointers. But because it no longer needs to shuffle values into ABI argument registers or allocate any stack space, it doesn’t need to do any explicit stack checks, and nearly all the rest of the code also goes away. All that is left is a loop decrementing a counter to zero:3

00000020 wasm[1]::function[1]:
        ;; Function prologue.
        20: pushq   %rbp
        21: movq    %rsp, %rbp

        ;; Loop.
        24: testl   %edx, %edx
        26: je      0x34
        2c: subl    $1, %edx
        2f: jmp     0x24

        ;; Function epilogue.
        34: movq    %rbp, %rsp
        37: popq    %rbp
        38: retq

With this simplest of examples, we can just count the difference in the number of instructions in each loop body:

  • 12 without inlining: 7 in N::g, plus 5 in M::f (2 to push the frame pointer, 2 to pop it, and 1 to return)
  • 4 with inlining

But we might as well verify that the inlined version really is faster via some quick-and-dirty benchmarking with hyperfine. This won’t measure only Wasm execution time; it also measures spawning a whole Wasmtime process, loading code from disk, etc., but it will work for our purposes if we crank up the number of iterations:

$ hyperfine \
    "wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm" \
    "wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm"

Benchmark 1: wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm
  Time (mean ± σ):     138.2 ms ±   9.6 ms    [User: 132.7 ms, System: 6.7 ms]
  Range (min … max):   128.7 ms … 167.7 ms    19 runs

Benchmark 2: wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm
  Time (mean ± σ):      37.5 ms ±   1.1 ms    [User: 33.0 ms, System: 5.8 ms]
  Range (min … max):    35.7 ms …  40.8 ms    77 runs

Summary
  'wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm' ran
    3.69 ± 0.28 times faster than 'wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm'

Okay, so if we measure Wasm doing almost nothing but empty function calls, and then measure again after removing the function call overhead, we get a big speed up (it would be disappointing if we didn’t!). But maybe we can benchmark something a tiny bit more realistic.

A program that we commonly reach for when benchmarking is a small wrapper around the pulldown-cmark markdown library that parses the CommonMark specification (which is itself written in markdown) and renders it to HTML. This is Real World™ code operating on Real World™ inputs that matches Real World™ use cases people have for Wasm. Good benchmarking is incredibly difficult, but this program is nonetheless a pretty good candidate for inclusion in our corpus. There’s just one hiccup: for our inliner to activate normally, we need a program that uses components and makes cross-module calls, and this program doesn’t do that. We don’t have a good corpus of such benchmarks yet, because this kind of component composition is still relatively new, so let’s keep using our pulldown-cmark program and measure our inliner’s effects via a more circuitous route.

Wasmtime has a tunable to enable the inlining of intra-module calls4, and rustc and LLVM have tunables for disabling inlining5. Therefore we can roughly estimate the speed ups our inliner might unlock on a similar, but extensively componentized and cross-module-calling, program by:

  • Disabling inlining when compiling the Rust source code to Wasm

  • Compiling the resulting Wasm binary to native code with Wasmtime twice: once with inlining disabled, and once with intra-module call inlining enabled

  • Comparing those two different compilations’ execution speeds

Running this experiment with Sightglass, our internal benchmarking infrastructure and tooling, yields the following results:

execution :: instructions-retired :: pulldown-cmark.wasm

  Δ = 7329995.35 ± 2.47 (confidence = 99%)

  with-inlining is 1.26x to 1.26x faster than without-inlining!

  [35729153 35729164.72 35729173] without-inlining
  [28399156 28399169.37 28399179] with-inlining

Conclusion

Wasmtime and Cranelift now have a function inliner! Test it out via the -C inlining=y command-line flag or via the wasmtime::Config::compiler_inlining method. Let us know if you run into any bugs or whether you see any speed-ups when running Wasm components containing multiple core modules.
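
For embedders using the crate API instead of the CLI, here is a minimal sketch of opting in; Config::compiler_inlining is the method named above, and the rest is ordinary Wasmtime setup:

use wasmtime::{Config, Engine};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut config = Config::new();
    // Opt in to the new function inliner.
    config.compiler_inlining(true);
    let engine = Engine::new(&config)?;
    // ... compile and run Wasm components with `engine` as usual ...
    let _ = engine;
    Ok(())
}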

Thanks to Chris Fallin and Graydon Hoare for reading early drafts of this piece and providing valuable feedback. Any errors that remain are my own.


  1. Deterministic compilation gives a number of benefits: testing is easier, debugging is easier, builds can be byte-for-byte reproducible, it is well-behaved in the face of incremental compilation and fine-grained caching, etc… 

  2. For what it is worth, this still allows collapsing chains of mutually-recursive calls (a calls b calls c calls a) into a single, self-recursive call (abc calls abc). Our actual implementation does not do this in practice, preferring additional parallelism instead, but it could in theory. 

  3. Cranelift cannot currently remove loops without side effects, and generally doesn’t mess with control-flow at all in its mid-end. We’ve had various discussions about how we might best fit control-flow-y optimizations into Cranelift’s mid-end architecture over the years, but it also isn’t something that we’ve seen would be very beneficial for actual, Real World™ Wasm programs, given that (a) LLVM has already done much of this kind of thing when producing the Wasm, and (b) we do some branch-folding when lowering from our mid-level IR to our machine-specific IR. Maybe we will revisit this sometime in the future if it crops up more often after inlining. 

  4. -C cranelift-wasmtime-inlining-intra-module=yes 

  5. -Cllvm-args=--inline-threshold=0, -Cllvm-args=--inlinehint-threshold=0, and -Zinline-mir=no 

This Week In Rust: This Week in Rust 626

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is cargo cat, a cargo-subcommand to put a random ASCII cat face on your terminal.

Thanks to Alejandra Gonzáles for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

427 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Positive week, most notably because of the new format_args!() and fmt::Arguments implementation from #148789. Another notable improvement came from moving some computations from one compiler stage to another to save memory and avoid unnecessary tree traversals in #148706.

Triage done by @panstromek. Revision range: 055d0d6a..6159a440

Summary:

(instructions:u) mean range count
Regressions ❌
(primary)
1.6% [0.2%, 5.6%] 11
Regressions ❌
(secondary)
0.3% [0.1%, 1.1%] 26
Improvements ✅
(primary)
-0.8% [-4.5%, -0.1%] 161
Improvements ✅
(secondary)
-1.4% [-38.1%, -0.1%] 168
All ❌✅ (primary) -0.6% [-4.5%, 5.6%] 172

2 Regressions, 4 Improvements, 10 Mixed; 4 of them in rollups. 48 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust Compiler Team (MCPs only)

No Items entered Final Comment Period this week for Cargo, Rust RFCs, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2025-11-19 - 2025-12-17 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

We adopted Rust for its security and are seeing a 1000x reduction in memory safety vulnerability density compared to Android’s C and C++ code. But the biggest surprise was Rust's impact on software delivery. With Rust changes having a 4x lower rollback rate and spending 25% less time in code review, the safer path is now also the faster one.

Jeff Vander Stoep on the Google Android blog

Thanks to binarycat for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Rust Programming Language Blog: Project goals update — October 2025

The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

"Beyond the `&`"
Progress
Point of contact

Frank King

Champions

compiler (Oliver Scherer), lang (TC)

Task owners

Frank King

1 detailed update available.

Comment by @frank-king posted on 2025-10-22:

Status update:

Regarding the TODO list in the next 6 months, here is the current status:

Introduce &pin mut|const place borrowing syntax

  • [x] parsing: #135731, merged.
  • [ ] lowering and borrowck: not started yet.

I've got some primitive ideas about borrowck, and I probably need to confirm with someone who is familiar with MIR/borrowck before starting to implement.

A pinned borrow consists of two MIR statements:

  1. a borrow statement that creates the mutable reference,
  2. and an ADT aggregate statement that puts the mutable reference into the Pin struct.

I may have to add a new borrow kind so that pinned borrows can be recognized. Then traverse the dataflow graph to make sure that pinned places cannot be moved.
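
As a rough sketch of those two statements (a hypothetical lowering, not actual rustc MIR output; the Pin field name here is illustrative):

// For a pinned borrow like:
//
//     let p = &pin mut place;
//
// the lowering described above would produce roughly:
//
//     _1 = &mut place;          // 1. borrow statement (new borrow kind)
//     _2 = Pin { pointer: _1 }; // 2. ADT aggregate into the Pin struct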

Pattern matching of &pin mut|const T types

In the past few months, I have struggled with the !Unpin stuff (the original design sketch, Alternative A): trying to implement it, refactoring, discussing on Zulip, and being constantly confused; luckily, we have finally reached a new agreement on the Alternative B version.

  • [ ] #139751 under review (reimplemented regarding Alternative B).

Support drop(&pin mut self) for structurally pinned types

  • [ ] adding a new Drop::pin_drop(&pin mut self) method: draft PR #144537

Supporting both Drop::drop(&mut self) and Drop::drop(&pin mut self) seems to introduce method overloading to Rust, which I think might need some more general way to handle (maybe via a rustc attribute?). So instead, I'd like to implement this via a new method Drop::pin_drop(&pin mut self) first.

Introduce &pin pat pattern syntax

Not started yet (I'd prefer doing that when pattern matching of &pin mut|const T types is ready).

Support &pin mut|const T -> &|&mut T coercion (requires T: Unpin of &pin mut T -> &mut T)

Not started yet. (It's quite independent, probably someone else can help with it)

Support auto borrowing of &pin mut|const place in method calls with &pin mut|const self receivers

Seems to be handled by Autoreborrow traits?

Design a language feature to solve Field Projections (rust-lang/rust-project-goals#390)
Progress
Point of contact

Benno Lossin

Champions

lang (Tyler Mandry)

Task owners

Benno Lossin

TL;DR.

There have been lots of internal developments since the last update:

Next Steps:

  • we're still planning to merge https://github.com/rust-lang/rust/pull/146307, after I have updated it with the new FRT logic and it has been reviewed
  • once that PR lands, I plan to update the library experiment to use the experimental FRTs
  • then the testing using that library can begin in the Linux kernel and other projects (this is where anyone interested in trying field projections can help out!)

4 detailed updates available.

Comment by @BennoLossin posted on 2025-10-23:
Decomposing Projections

A chained projection operation should naturally decompose, so foo.@bar.@baz should be the same as writing (foo.@bar).@baz. Until now, the different parenthesizations would have allowed different outcomes. This behavior is confusing and also makes many implementation details more complicated than they need to be.

Field Representing Types

Since projections now decompose, we have no need from a design perspective for multi-level FRTs. So field_of!(Foo, bar.baz) is no longer required to work. Thus we have decided to restrict FRTs to only a single field and get rid of the path. This simplifies the implementation in the compiler and also avoids certain difficult questions such as the locality of FRTs (if we had a path, we would have to walk the path and it is local, if all structs included in the path are local). Now with only a single field, the FRT is local if the struct is.

We also discovered that it is a good idea to make FRTs inhabited (they still are ZSTs), since then it allows the following pattern to work:

fn project_free_standing<F: Field>(_: F, r: &F::Base) -> &F::Type { ... }

// can now call the function without turbofish:
let my_field = project_free_standing(field_of!(MyStruct, my_field), &my_struct);
FRTs via const Generics

We also spent some time thinking about const generics and FRTs on zulip:

In short, this won't be happening any time soon. However, it could be a future implementation of the field_of! macro depending on how reflection through const generics evolves (but also only in the far-ish future).

Comment by @BennoLossin posted on 2025-10-23:
Single Project Operator & Trait via Exclusive Decay

It would be great if we only had to add a single operator and trait and could obtain the same features as we have with two. The current reason for having two operators is to allow both shared and exclusive projections. We could get by with a single one if we had another operation that decays an exclusive reference (or custom, exclusive smart-pointer type) into a shared reference (or the custom, shared version of the smart pointer). This decay operation would need borrow checker support in order to allow simultaneous projections of one field exclusively and another field shared (and possibly multiple times).

This goes into a similar direction as the reborrowing project goal https://github.com/rust-lang/rust-project-goals/issues/399, however, it needs extra borrow checker support.

fn add(x: cell::RefMut<'_, i32>, step: i32) {
    *x = *x + step;
}

struct Point { x: i32, y: i32 }

fn example(p: cell::RefMut<'_, Point>) {
    let y: cell::Ref<'_, i32> = coerce_shared!(p.@y);
    let y2 = coerce_shared!(p.@y); // can project twice if both are coerced
    add(p.@x, *y);
    add(p.@x, *y2);
    assert_eq!(*y, *y2); // can still use them afterwards
}

Problems:

  • explicit syntax is annoying for these "coercions", but
  • we cannot make this implicit:
    • if this were an implicit operation, only the borrow checker would know when one had to coerce,
    • this operation is allowed to change the type,
    • this results in borrow check feeding back into type check, which is not possible, or at least extremely difficult
Syntax

Not much movement here, it depends on the question discussed in the previous section, since if we only have one operator, we could choose .@, -> or ~; if we have to have two, then we need additional syntax to differentiate them.

Comment by @BennoLossin posted on 2025-10-23:
Simplifying the Project trait

There have been some developments in pin ergonomics https://github.com/rust-lang/rust/issues/130494: "alternative B" is now the main approach, meaning that Pin<&mut T> has linear projections: its output type doesn't change depending on the concrete field (really the field itself, not only its type). So it falls into the general projection pattern Pin<&mut Struct> -> Pin<&mut Field>, which means that Pin doesn't need any where clauses when implementing Project.

Additionally we have found out that RCU also doesn't need where clauses, as we can also make its projections linear by introducing a MutexRef<'_, T> smart pointer that always allows projections and only has special behavior for T = Rcu<U>. Discussed on zulip after this message.

For this reason we can get rid of the generic argument to Project and mandate that all types that support projections support them for all fields. So the new Project trait looks like this:

// still need a common super trait for `Project` & `ProjectMut`
pub trait Projectable {
    type Target: ?Sized;
}

pub unsafe trait Project: Projectable {
    type Output<F: Field<Base = Self::Target>>;

    unsafe fn project<F: Field<Base = Self::Target>>(
        this: *const Self,
    ) -> Self::Output<F>;
}
Are FRTs even necessary?

With this change we can also think about getting rid of FRTs entirely. For example we could have the following Project trait:

pub unsafe trait Project: Projectable {
    type Output<F>;

    unsafe fn project<const OFFSET: usize, F>(
        this: *const Self,
    ) -> Self::Output<F>;
}

There are other applications for FRTs that are very useful for Rust-for-Linux. For example, storing field information for intrusive data structures directly in that structure as a generic.

More concretely, in the kernel there are workqueues that allow you to run code in parallel to the currently running thread. In order to insert an item into a workqueue, an intrusive linked list is used. However, we need to be able to insert the same item into multiple lists. This is done by storing multiple instances of the Work struct. Its definition is:

pub struct Work<T, const ID: u64> { ... }

Where the ID generic must be unique inside of the struct.

struct MyDriver {
    data: Arc<MyData>,
    main_work: Work<Self, 0>,
    aux_work: Work<Self, 1>,
    // more fields ...
}
// Then you call a macro to implement the unsafe `HasWork` trait safely.
// It asserts that there is a field of type `Work<MyDriver, 0>` at the given field
// (and also exposes its offset).
impl_has_work!(impl HasWork<MyDriver, 0> for MyDriver { self.main_work });
impl_has_work!(impl HasWork<MyDriver, 1> for MyDriver { self.aux_work });
// Then you implement `WorkItem` twice:
impl WorkItem<0> for MyDriver {
    type Pointer = Arc<Self>;

    fn run(this: Self::Pointer) {
        println!("doing the main work here");
    }
}

impl WorkItem<1> for MyDriver {
    type Pointer = Arc<Self>;

    fn run(this: Self::Pointer) {
        println!("doing the aux work here");
    }
}

// And finally you can call `enqueue` on a `Queue`:
let my_driver = Arc::new(MyDriver::new());
let queue: &'static Queue = kernel::workqueue::system_highpri();
queue.enqueue::<_, 0>(my_driver.clone()).expect("my_driver is not yet enqueued for id 0");

// there are different queues
let queue = kernel::workqueue::system_long();
queue.enqueue::<_, 1>(my_driver.clone()).expect("my_driver is not yet enqueued for id 1");

// cannot insert multiple times:
assert!(queue.enqueue::<_, 1>(my_driver.clone()).is_err());

FRTs could be used instead of this id, making the definition be Work<F: Field> (also merging the T parameter).

struct MyDriver {
    data: Arc<MyData>,
    main_work: Work<field_of!(Self, main_work)>,
    aux_work: Work<field_of!(Self, aux_work)>,
    // more fields ...
}
impl WorkItem<field_of!(MyDriver, main_work)> for MyDriver {
    type Pointer = Arc<Self>;

    fn run(this: Self::Pointer) {
        println!("doing the main work here");
    }
}

impl WorkItem<field_of!(MyDriver, aux_work)> for MyDriver {
    type Pointer = Arc<Self>;

    fn run(this: Self::Pointer) {
        println!("doing the aux work here");
    }
}

let my_driver = Arc::new(MyDriver::new());
let queue: &'static Queue = kernel::workqueue::system_highpri();
queue
    .enqueue(my_driver.clone(), field_of!(MyDriver, main_work))
    // ^ using Gary's idea to avoid turbofish
    .expect("my_driver is not yet enqueued for main_work");

let queue = kernel::workqueue::system_long();
queue
    .enqueue(my_driver.clone(), field_of!(MyDriver, aux_work))
    .expect("my_driver is not yet enqueued for aux_work");

assert!(queue.enqueue(my_driver.clone(), field_of!(MyDriver, aux_work)).is_err());

This makes it overall a lot more readable (by providing sensible names instead of magic numbers), and maintainable (we can add a new variant without worrying about which IDs are unused). It also avoids the unsafe HasWork trait and the need to write the impl_has_work! macro for each Work field.

I still think that having FRTs is going to be the right call for field projections as well, so I'm going to keep their experiment going. However, we should fully explore their necessity and rationale for a future RFC.

Comment by @BennoLossin posted on 2025-10-23:
Making Project::project safe

In the current proposal the Project::project function is unsafe, because it takes a raw pointer as an argument. This is pretty unusual for an operator trait (it would be the first). Tyler Mandry thought about a way of making it safe by introducing "partial struct types". This new type is spelled Struct.F where F is an FRT of that struct. It's like Struct, but with the restriction that only the field represented by F can be accessed. So for example &Struct.F would point to Struct, but only allow one to read that single field. This way we could design the Project trait in a safe manner:

// governs conversion of `Self` to `Narrowed<F>` & replaces Projectable
pub unsafe trait NarrowPointee {
    type Target;

    type Narrowed<F: Field<Base = Self::Target>>;
}

pub trait Project: NarrowPointee {
    type Output<F: Field<Base = Self::Target>>;

    fn project<F: Field<Base = Self::Target>>(
        narrowed: Self::Narrowed<F>,
    ) -> Self::Output<F>;
}

The NarrowPointee trait allows a type to declare that it supports conversions of its Target type to Target.F. For example, we would implement it for RefMut like this:

unsafe impl<'a, T> NarrowPointee for RefMut<'a, T> {
    type Target = T;
    type Narrowed<F: Field<Base = T>> = RefMut<'a, T.F>;
}

Then we can make the narrowing a builtin operation in the compiler that gets prepended on the actual coercion operation.

However, this "partial struct type" has a fatal flaw that Oliver Scherer found (edit by oli: it was actually boxy who found it): it conflicts with mem::swap. If Struct.F has the same layout as Struct, then writing to such a variable will overwrite all bytes, thus also overwriting fields that aren't F. Even if we made an exception for these types and moves/copies, this wouldn't work, as a user today can rely on the fact that they write size_of::<T>() bytes to a *mut T and thus have a valid value of that type at that location. Tyler Mandry suggested we make it !Sized and even !MetaSized to prevent overwriting values of that type (maybe the Overwrite trait could come in handy here as well). But this might make "partial struct types" too weak to be truly useful. Additionally, this poses many more questions that we haven't yet tackled.
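
A minimal sketch of that hazard; swap_whole_value is a made-up stand-in for any code that moves whole values:

use std::mem;

// Any move or swap of a `T` is entitled to write all
// `size_of::<T>()` bytes. If `Struct.F` shared `Struct`'s full
// layout, swapping two `&mut Struct.F` values would also clobber
// the fields the narrowed type is not supposed to give access to.
fn swap_whole_value<T>(a: &mut T, b: &mut T) {
    mem::swap(a, b); // writes size_of::<T>() bytes through both refs
}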

Progress
Point of contact

Aapo Alasuutari

Champions

compiler (Oliver Scherer), lang (Tyler Mandry)

Task owners

Aapo Alasuutari

1 detailed update available.

Comment by @aapoalas posted on 2025-10-22:

An initial implementation of a Reborrow trait for types with only lifetimes and exclusive reference semantics is working, but it is not yet upstreamed nor in review. The CoerceShared implementation is not yet started.

Proper composable implementation will likely require a different tactic than the current one. Safety and validity checks are currently absent as well and will require more work.

"Flexible, fast(er) compilation"
Progress
Point of contact

David Wood

Champions

cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)

Task owners

Adam Gemmell, David Wood

1 detailed update available.

Comment by @davidtwco posted on 2025-10-31:

We've now opened our first batch of RFCs: rust-lang/rfcs#3873, rust-lang/rfcs#3874 and rust-lang/rfcs#3875

Production-ready cranelift backend (rust-lang/rust-project-goals#397)
Progress
Point of contact

Folkert de Vries

Champions

compiler (bjorn3)

Task owners

bjorn3, Folkert de Vries, Trifecta Tech Foundation

No detailed updates available.
Promoting Parallel Front End (rust-lang/rust-project-goals#121)
Progress
Point of contact

Sparrow Li

Task owners

Sparrow Li

No detailed updates available.
Relink don't Rebuild (rust-lang/rust-project-goals#400)
Progress
Point of contact

Jane Lusby

Champions

cargo (Weihang Lo), compiler (Oliver Scherer)

Task owners

Ally Sommers, Piotr Osiewicz

No detailed updates available.
"Higher-level Rust"
Progress
Point of contact

Niko Matsakis

Champions

compiler (Santiago Pastorino), lang (Niko Matsakis)

Task owners

Niko Matsakis, Santiago Pastorino

3 detailed updates available.

Comment by @nikomatsakis posted on 2025-10-07:

I posted this blog post that proposes that we ought to name the trait Handle and define it as a trait where clone produces an "entangled" value -- i.e., a second handle to the same underlying value.

Before that, there's been a LOT of conversation that hasn't made its way onto this tracking issue. Trying to fix that! Here is a brief summary, in any case:

RFC #3680: https://github.com/rust-lang/rfcs/pull/3680

Comment by @nikomatsakis posted on 2025-10-09:

I wrote up a brief summary of my current thoughts on Zulip; I plan to move this content into a series of blog posts, but I figured it was worth laying it out here too for those watching this space:

09:11 (1) I don't think clones/handles are categorically different when it comes to how much you want to see them made explicit; some applications want them both to be explicit, some want them automatic, some will want a mix -- and possibly other kinds of categorizations.

09:11 (2) But I do think that if you are making everything explicit, it's useful to see the difference between a general purpose clone and a handle.

09:12 (3) I also think there are many classes of software where there is value in having everything explicit -- and that those classes are often the ones most in Rust's "sweet spot". So we should make sure that it's possible to have everything be explicit ergonomically.

09:12 (4) This does not imply that we can't make automatic clones/handles possible too -- it is just that we should treat both use cases (explicit and automatic) as first-class in importance.

09:13 (5) Right now I'm focused on the explicit case. I think this is what the use-use-everywhere was about, though I prefer a different proposal now -- basically just making handle and clone methods understood and specially handled by the compiler for optimization and desugaring purposes. There are pros and cons to that, obviously, and that's what I plan to write-up in more detail.

09:14 (6) On a related note, I think we also need explicit closure captures, which is a whole interesting design space. I don't personally find it "sufficient" for the "fully explicit" case but I could understand why others might think it is, and it's probably a good step to take.

09:15 (7) I go back and forth on profiles -- basically a fancy name for lint-groups based on application domain -- and whether I think we should go that direction, but I think that if we were going to go automatic, that's the way I would do it: i.e., the compiler will automatically insert calls to clone and handle, but it will lint when it does so; the lint can by deny-by-default at first but applications could opt into allow for either or both.

I previously wanted allow-by-default but I've decided this is a silly hill to die on, and it's probably better to move in smaller increments.

Comment by @nikomatsakis posted on 2025-10-22:

Update:

There has been more discussion about the Handle trait on Zulip and elsewhere. Some of the notable comments:

  • Downsides of the current name: it's a noun, which doesn't follow Rust naming convention, and the verb handle is very generic and could mean many things.
  • Alternative names proposed: Entangle/entangle or entangled, Share/share, Alias/alias, or Retain/retain; or, if we want to go seriously hardcore on the science names, Mitose/mitose or Fission/fission.
  • There has been some criticism pointing out that focusing on handles means that other types which might be "cheaply cloneable" don't qualify.

For now I will go on using the term Handle, but I agree with the critique that it should be a verb, and currently prefer Alias/alias as an alternative.


I'm continuing to work my way through the backlog of blog posts about the conversations from RustConf. The purpose of these blog posts is not just to socialize the ideas more broadly but also to help myself think through them. Here is the latest post:

https://smallcultfollowing.com/babysteps/blog/2025/10/13/ergonomic-explicit-handles/

The point of this post is to argue that, whatever else we do, Rust should have a way to create handles/clones (and closures that work with them) which is at once explicit and ergonomic.

To give a preview of my current thinking, I am working now on the next post, which will discuss how we should add an explicit capture clause syntax. This is somewhat orthogonal but not really, in that an explicit syntax would make closures that clone more ergonomic (but only mildly). I don't have a proposal I fully like for this syntax though, and there are a lot of interesting questions to work out. As a strawperson, though, you might imagine [this older proposal I wrote up](https://hackmd.io/@nikomatsakis/SyI0eMFXO?type=view), which would mean something like this:

let actor1 = async move(reply_tx.handle()) {
    reply_tx.send(...);
};
let actor2 = async move(reply_tx.handle()) {
    reply_tx.send(...);
};

This is an improvement on

let actor1 = {
    let reply_tx = reply_tx.handle();
    async move(reply_tx.handle()) {
        reply_tx.send(...);
    }
};

but only mildly.

The next post I intend to write would be a variant on "use, use everywhere" that recommends method call syntax and permitting the compiler to elide handle/clone calls, so that the example becomes

let actor1 = async move {
    reply_tx.handle().send(...);
    //       -------- due to optimizations, the handle creation would happen only when the future is *created*
};

This would mean that cloning of strings and things might benefit from the same behavior:

let actor1 = async move {
    reply_tx.handle().send(some_id.clone());
    //                     -------- the `some_id.clone()` would occur at future creation time
};

The rationale that got me here is minimizing perceived complexity and focusing on muscle memory (just add .clone() or .handle() to fix use-after-move errors, no matter when/where they occur). The cost of course is that (a) Handle/Clone become very special; and (b) it blurs the lines on when code execution occurs. Despite the .handle() occurring inside the future (resp. closure) body, it actually executes when the future (resp. closure) is created in this case (in other cases, such as a closure that implements Fn or FnMut and hence executes more than once, it might occur during each execution as well).

Stabilize cargo-script (rust-lang/rust-project-goals#119)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett)

Task owners

Ed Page

No detailed updates available.
"Unblocking dormant traits"
Progress
Point of contact

Taylor Cramer

Champions

lang (Taylor Cramer), types (Oliver Scherer)

Task owners

Taylor Cramer, Taylor Cramer & others

No detailed updates available.
In-place initialization (rust-lang/rust-project-goals#395)
Progress
Point of contact

Alice Ryhl

Champions

lang (Taylor Cramer)

Task owners

Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts

1 detailed update available.

Comment by @Darksonn posted on 2025-10-22:

This is our first update we’re posting for the in-place init work. Overall things are progressing well, with lively discussion happening on the newly minted t-lang/in-place-init Zulip channel. Here are the highlights since the lang team design meeting at the end of July:

Next-generation trait solver (rust-lang/rust-project-goals#113)
Progress
Point of contact

lcnr

Champions

types (lcnr)

Task owners

Boxy, Michael Goulet, lcnr

1 detailed update available.

Comment by @lcnr posted on 2025-10-23:

Since the last update we've fixed the hang in rayon in https://github.com/rust-lang/rust/pull/144991 and https://github.com/rust-lang/rust/pull/144732 which relied on https://github.com/rust-lang/rust/pull/143054 https://github.com/rust-lang/rust/pull/144955 https://github.com/rust-lang/rust/pull/144405 https://github.com/rust-lang/rust/pull/145706. This introduced some search graph bugs which we fixed in https://github.com/rust-lang/rust/pull/147061 https://github.com/rust-lang/rust/pull/147266.

We're mostly done with the opaque type support now. Doing so required a lot of quite involved changes:

  • https://github.com/rust-lang/rust/pull/145244 non-defining uses in borrowck
  • https://github.com/rust-lang/rust/pull/145925 non-defining uses in borrowck closure support
  • https://github.com/rust-lang/rust/pull/145711 non-defining uses in hir typeck
  • https://github.com/rust-lang/rust/pull/140375 eagerly compute sub_unification_table again
  • https://github.com/rust-lang/rust/pull/146329 item bounds
  • https://github.com/rust-lang/rust/pull/145993 function calls
  • https://github.com/rust-lang/rust/pull/146885 method selection
  • https://github.com/rust-lang/rust/pull/147249 fallback

We also fixed some additional self-contained issues and perf improvements: https://github.com/rust-lang/rust/pull/146725 https://github.com/rust-lang/rust/pull/147138 https://github.com/rust-lang/rust/pull/147152 https://github.com/rust-lang/rust/pull/145713 https://github.com/rust-lang/rust/pull/145951

We have also migrated rust-analyzer to entirely use the new solver instead of chalk. This required a large effort mainly by Jack Huey Chayim Refael Friedman and Shoyu Vanilla. That's some really impressive work on their end 🎉 See this list of merged PRs for an overview of what this required on the r-a side. Chayim Refael Friedman also landed some changes to the trait solver itself to simplify the integration: https://github.com/rust-lang/rust/pull/145377 https://github.com/rust-lang/rust/pull/146111 https://github.com/rust-lang/rust/pull/147723 https://github.com/rust-lang/rust/pull/146182.

We're still tracking the remaining issues in https://github.com/orgs/rust-lang/projects/61/views/1. Most of these issues are comparatively simple and I expect us to fix most of them over the next few months, getting us close to stabilization. We're currently doing another crater triage which may surface a few more issues.

Stabilizable Polonius support on nightly (rust-lang/rust-project-goals#118)
Progress
Point of contact

Rémy Rakic

Champions

types (Jack Huey)

Task owners

Amanda Stjerna, Rémy Rakic, Niko Matsakis

1 detailed update available.

Comment by @lqd posted on 2025-10-22:

Here's another summary of the most interesting developments since the last update:

  • reviews and updates have been done on the polonius alpha, and it has since landed
  • the last 2 trivial diagnostics failures were fixed
  • we've done perf runs, crater runs, completed gathering stats on crates.io for avg and outliers in CFG sizes, locals, loan and region counts, dataflow framework behavior on unexpected graph shapes and bitset invalidations
  • I worked on dataflow for borrowck: single pass analyses on acyclic CFGs, dataflow analyses on SCCs for cyclic CFGs
  • some more pieces of amanda's SCC rework have landed, with lcnr's help
  • lcnr's opaque type rework, borrowcking of nested items, and so on, also fixed some issues we mentioned in previous updates with member constraints for computing when loans are going out of scope
  • we also studied recent papers in flow-sensitive pointer analysis
  • I also started the loans-in-scope algorithm rework, and also have reachability acceleration with the CFG SCCs
  • the last 2 actual failures in the UI tests are soundness issues related to liveness of captured regions for opaque types: some regions that should be live are not. (This was done to help with precise capture and to limit the impact of capturing unused regions that cannot actually be used in the hidden type.) The unsoundness should not be observable with NLL, but the polonius alpha relies on liveness to propagate loans throughout the CFG: these dead regions prevent detecting some error-causing loan invalidations. The easiest fix would cause breakage in code that's now accepted. Niko, Jack, and I have another possible solution, and I'm trying to implement it now

Goals looking for help

Other goal updates

Borrow checking in a-mir-formality (rust-lang/rust-project-goals#122)
Progress
Point of contact

Niko Matsakis

Champions

types (Niko Matsakis)

Task owners

Niko Matsakis, tiif

No detailed updates available.
C++/Rust Interop Problem Space Mapping (rust-lang/rust-project-goals#388)
Progress
Point of contact

Jon Bauman

Champions

compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay)

Task owners

Jon Bauman

No detailed updates available.
Comprehensive niche checks for Rust (rust-lang/rust-project-goals#262)
Progress
Point of contact

Bastian Kersting

Champions

compiler (Ben Kimock), opsem (Ben Kimock)

Task owners

Bastian Kersting, Jakob Koschel

No detailed updates available.
Progress
Point of contact

Boxy

Champions

lang (Niko Matsakis)

Task owners

Boxy, Noah Lev

1 detailed update available.

Comment by @nikomatsakis posted on 2025-10-22:

We had a design meeting on 2025-09-10, minutes available here, aiming at these questions:

There are a few concrete things I would like to get out of this meeting, listed sequentially in order of most to least important:

  1. Would you be comfortable stabilizing the initial ADTs-only extensions?
    • This would be properly RFC'd before stabilization, this ask is just a "vibe check".
  2. Are you interested in seeing Per-Value Rejection for enums with undesirable variants?
  3. How do you feel about the idea of Lossy Conversion as an approach in general, what about specifically for the References and Raw Pointers extensions?
  4. How do you feel about the idea of dropping the One Equality ideal in general, what about specifically for -0.0 vs +0.0, what about specifically for NaN values?

The vibe checks on the first one were as follows:

Vibe check

The main ask:

Would you be comfortable stabilizing the initial ADTs-only extensions?

(plus the other ones)

nikomatsakis

I am +1 on working incrementally and focusing first on ADTs. I am supportive of stabilization overall but I don't feel like we've "nailed" the way to talk or think about these things. So I guess my "vibe" is +1 but if this doc were turned into an RFC kind of "as is" I would probably wind up -1 on the RFC, I think more work is needed (in some sense, the question is, "what is the name of the opt-in trait and why is it named that"). This space is complex and I think we have to do better at helping people understand the fine-grained distinctions between runtime values, const-eval values, and type-safe values.

Niko: if we add some sort of derive of a trait name, how much value are we getting from the derive, what should the trait be named?

tmandry

I think we'll learn the most by stabilizing ADTs in a forward compatible way (including an opt-in) now. So +1 from me on the proposed design.

It's worth noting that this is a feature that interacts with many other features, and we will be considering extensions to the MVP for the foreseeable future. To some extent the lang team has committed to this already but we should know what we're signing ourselves up for.

scottmcm

scottmcm: concern over the private fields restriction (see question below), but otherwise for the top ask, yes happy to just do "simple" types (no floats, no cells, no references, etc).

TC

As Niko said, +1 on working incrementally, and I too am supportive overall.

As a vibe, per-value rejection seems fairly OK to me in that we decided to do value-based reasoning for other const checks. It occurs to me there's some parallel with that.

https://github.com/rust-lang/rust/pull/119044

As for the opt-in on types, I see the logic. I do have reservations about adding too many opt-ins to the language, and so I'm curious about whether this can be safely removed.

Regarding floats, I see the question on these as related to our decision about how to handle padding in structs. If it makes sense to normalize or otherwise treat -0.0 and +0.0 as the same, then it'd also make sense in my view to normalize or otherwise treat two structs with the same values but different padding (or where only one has initialized padding) as the same.

Continue resolving `cargo-semver-checks` blockers for merging into cargo (rust-lang/rust-project-goals#104)
Progress
Point of contact

Predrag Gruevski

Champions

cargo (Ed Page), rustdoc (Alona Enraght-Moony)

Task owners

Predrag Gruevski

No detailed updates available.
Develop the capabilities to keep the FLS up to date (rust-lang/rust-project-goals#391)
Progress
Point of contact

Pete LeVasseur

Champions

bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur)

Task owners

Pete LeVasseur, Contributors from Ferrous Systems and others TBD, t-spec and contributors from Ferrous Systems

2 detailed updates available.

Comment by @nikomatsakis posted on 2025-10-22:

After much discussion, we have decided to charter this team as a t-spec subteam. Pete LeVasseur and I are working to make that happen now.

Comment by @nikomatsakis posted on 2025-10-22:

PR with charters:

https://github.com/rust-lang/team/pull/2028

Emit Retags in Codegen (rust-lang/rust-project-goals#392)
Progress
Point of contact

Ian McCormack

Champions

compiler (Ralf Jung), opsem (Ralf Jung)

Task owners

Ian McCormack

1 detailed update available.

Comment by @icmccorm posted on 2025-10-25:

Here's our first status update!

  • We've been experimenting with a few different ways of emitting retags in codegen, as well as a few different forms that retags should take at this level. We think we've settled on a set of changes that's worth sending out to the community for feedback, likely as a pre-RFC. You can expect more engagement from us on this level in the next couple of weeks.

  • We've used these changes to create an initial working prototype for BorrowSanitizer that supports finding Tree Borrows violations in tiny, single-threaded Rust programs. We're working on getting Miri's test suite ported over to confirm that everything is working correctly and that we've quashed any false positives or false negatives.

  • This coming Monday, I'll be presenting on BorrowSanitizer and this project goal at the Workshop on Supporting Memory Safety in LLVM. Please reach out if you're attending and would like to chat more in person!

Expand the Rust Reference to specify more aspects of the Rust language (rust-lang/rust-project-goals#394)
Progress
Point of contact

Josh Triplett

Champions

lang-docs (Josh Triplett), spec (Josh Triplett)

Task owners

Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby

1 detailed update available.

Comment by @joshtriplett posted on 2025-10-22:

The work on this goal has led to many ongoing discussions on the current status of the Reference. Those discussions are still in progress.

Meanwhile, many people working on this goal have successfully written outlines or draft chapters, at various stages of completeness. There's a broken-out status report at https://github.com/rust-lang/project-goal-reference-expansion/issues/11 .

Finish the libtest json output experiment (rust-lang/rust-project-goals#255)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page)

Task owners

Ed Page

No detailed updates available.
Finish the std::offload module (rust-lang/rust-project-goals#109)
Progress
Point of contact

Manuel Drehwald

Champions

compiler (Manuel Drehwald), lang (TC)

Task owners

Manuel Drehwald, LLVM offload/GPU contributors

1 detailed update available.

Comment by @ZuseZ4 posted on 2025-10-22:

A longer update of the changes over the fall. We had two gsoc contributors and a lot of smaller improvements for std::autodiff. The first two improvements were already mentioned as draft PRs in the previous update, but got merged since. I also upstreamed more std::offload changes.

  1. Marcelo Domínguez refactored the autodiff frontend to be a proper rustc intrinsic, rather than just hackend into the frontend like I first implemented it. This already solved multiple open issues, reduced the code size, and made it generally easier to maintain going forward.
  2. Karan Janthe upstreamed a first implementation of "TypeTrees", which lowers Rust type and layout information to Enzyme, our autodiff backend. This makes it more likely that you won't see compilation failures with the error message "Can not deduce type of ". We might refine in the future exactly what information we lower.
  3. Karan Janthe made sure that std::autodiff has support for f16 and f128 types.
  4. One more of my offload PRs landed. I also figured out why the LLVM-IR generated by the std::offload code needed some manual adjustments in the past. We were inconsistent when communicating with LLVM's offload module, about whether we'd want a magic, extra, dyn_ptr argument, that enables kernels to use some extra features. We don't use these features yet, but for consistency we now always generate and expect the extra pointer. The bugfix is currently under review, once it lands upstream, rustc is able to run code on GPUs (still with a little help of clang).
  5. Marcelo Domínguez refactored my offload frontend, again introducing a proper rustc intrinsic. That code will still need to go through review, but once it lands it will get us a lot closer to a usable frontend. He also started to generate type information for our offload backend to know how many bytes to copy to and from the devices. This is a very simplified version of our autodiff typetrees.
  6. At RustChinaConf, I was lucky to run into the Wild linker author David Lattimore, who helped me create a draft PR that can dlopen Enzyme at runtime. This means we could ship it via rustup for people interested in std::autodiff, and don't have to link it in at build time, which would increase binary size even for those users who are not interested in it. There are some open issues, so please reach out if you have time to get the PR ready!
  7. @sgasho spent a lot of time trying to get Rust into the Enzyme CI. Unfortunately that is a tricky process due to Enzyme's CI requirements, so it's not merged yet.
  8. I tried to simplify building std::autodiff by marking it as compatible with download-llvm-ci. Building LLVM from source was previously by far the slowest part of building rustc with autodiff, so this has large potential. Unfortunately the CI experiments revealed some issues around this setting. We think we know why Enzyme's CMake causes issues here and are working on a fix to make it more reliable.
  9. Osama Abdelkader and bjorn3 looked into automatically enabling fat-lto when autodiff is enabled. In the past, forgetting to enable fat-lto resulted in incorrect (zero) derivatives. The first approach unfortunately wasn't able to cover all cases, so we need to see whether we can handle it nicely. If that turns out to be too complicated, we will revert it and instead "just" provide a nice error message, rather than returning incorrect derivatives.

All in all, I spent a lot more time on infra (dlopen, cmake, download-llvm-ci, ...) than I'd like, but on the happy side there are only so many features left that I want to support here, so there is an end in sight. I am also about to give a tech talk at the upcoming LLVM dev meeting about safe GPU programming in Rust.

Getting Rust for Linux into stable Rust: compiler features (rust-lang/rust-project-goals#407)
Progress
Point of contact

Tomas Sedovic

Champions

compiler (Wesley Wiser)

Task owners

(depending on the flag)

3 detailed updates available.

Comment by @tomassedovic posted on 2025-10-09:

I've updated the top-level description to show everything we're tracking here (please let me know if anything's missing or incorrect!).

Comment by @tomassedovic posted on 2025-10-10:
  • [merged] Sanitizers target modificators / https://github.com/rust-lang/rust/pull/138736
  • [merged] Add assembly test for -Zreg-struct-return option / https://github.com/rust-lang/rust/pull/145382
  • [merged] CI: rfl: move job forward to Linux v6.17-rc5 to remove temporary commits / https://github.com/rust-lang/rust/pull/146368
  • -Zharden-sls / https://github.com/rust-lang/rust/pull/136597
    • Waiting on review
  • #![register_tool] / https://github.com/rust-lang/rust/issues/66079
    • Waiting on https://github.com/rust-lang/rfcs/pull/3808
  • -Zno-jump-tables / https://github.com/rust-lang/rust/pull/145974
    • Active FCP, waiting on 2 check boxes
Comment by @tomassedovic posted on 2025-10-24:
-Cunsigned-char

We've discussed adding an option analogous to -funsigned-char in GCC and Clang that would allow you to set whether std::ffi::c_char is represented by i8 or u8. Right now, this is platform-specific and should map onto whatever char is in C on the same platform. However, Linux explicitly sets char to be unsigned, and then our Rust code conflicts with that. And in this case the sign is significant.

Rust for Linux works around this with their rust::ffi module, but now that they've switched to the standard library's CStr type, they're running into it again with the as_ptr method.

Tyler mentioned https://docs.rs/ffi_11/latest/ffi_11/ which preserves the char / signed char / unsigned char distinction.
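
As a minimal sketch of the mismatch (the per-target signedness of c_char is standard Rust behavior; first_byte is a made-up helper):

use std::ffi::{c_char, CStr};

// `c_char` is `i8` on some targets and `u8` on others, so code using
// `CStr::as_ptr` has to pick a signedness explicitly when, as in the
// kernel, `char` is always unsigned.
fn first_byte(s: &CStr) -> u8 {
    let p: *const c_char = s.as_ptr();
    // Cast to a fixed-signedness type instead of assuming the sign.
    unsafe { *p.cast::<u8>() }
}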

Grouping target modifier flags

The proposed unsigned-char option is essentially a target modifier. We have several more of these (e.g. llvm-args, no-redzone) in the Rust compiler, and Josh suggested we distinguish them somehow, e.g. by giving them the same prefix or possibly creating a new config option (right now we have -C and -Z; maybe we could add -T for target modifiers) so they're distinct from, e.g., the codegen options.

Josh started a Zulip thread here: https://rust-lang.zulipchat.com/#narrow/channel/131828-t-compiler/topic/Grouping.20target.20modifier.20options.3F/with/546524232

#![register_tool] / rust#66079 / RFC#3808

Tyler looked at the RFC. The Crubit team started using register_tool but then moved to using an attribute instead. He proposed we could do something similar here, although it would require a new feature and RFC.

The team was open to seeing how it would work.

Getting Rust for Linux into stable Rust: language features (rust-lang/rust-project-goals#116)
Progress
Point of contact

Tomas Sedovic

Champions

lang (Josh Triplett), lang-docs (TC)

Task owners

Ding Xiang Fei

3 detailed updates available.

Comment by @tomassedovic posted on 2025-10-09:

I've updated the top-level description to show everything we're tracking here (please let me know if anything's missing or incorrect!).

Comment by @tomassedovic posted on 2025-10-10:
Deref/Receiver
  • Ding Xiang Fei keeps updating the PR: https://github.com/rust-lang/rust/pull/146095
  • They're also working on a document to explain the consequences of this split
Arbitrary Self Types
  • https://github.com/rust-lang/rust/issues/44874
  • Waiting on the Deref/Receiver work, no updates
derive(CoercePointee)
  • https://github.com/rust-lang/rust/pull/133820
  • Waiting on Arbitrary self types
Pass pointers to const in asm! blocks
  • RFC: https://github.com/rust-lang/rfcs/pull/3848
  • The Lang team went through the RFC with Alice Ryhl on 2025-10-08 and it's in FCP now
Field projections
  • Benno Lossin opened a PR here: https://github.com/rust-lang/rust/pull/146307
  • Being reviewed by the compiler folks
Providing \0 terminated file names with #[track_caller]
  • The feature has been implemented and stabilized with file_as_c_str as the method name: https://github.com/rust-lang/rust/pull/145664
Supertrait auto impl RFC
  • Ding Xiang Fei opened the RFC and works with the reviewers: https://github.com/rust-lang/rfcs/pull/3851
Other
  • Miguel Ojeda spoke to Linus about rustfmt and they came to agreement.
Comment by @tomassedovic posted on 2025-10-24:
Layout of core::any::TypeId

Danilo asked about the layout of TypeId -- specifically its size and whether they can rely on it because they want to store it in a C struct. The struct's size is currently 16 bytes, but that's an implementation detail.

As a vibe check, Josh Triplett and Tyler Mandry were open to guaranteeing that it's going to be at most 16 bytes, but they wanted to reserve the option to reduce the size at some point. The next step is to have the full Lang and Libs teams discuss the proposal.

Danilo will open a PR to get that discussion started.
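
A minimal sketch of the kind of reliance being discussed; since the 16-byte size is an implementation detail today, treat this as illustration rather than a guarantee:

use std::any::TypeId;
use std::mem::size_of;

// Compile-time check that `TypeId` fits a 16-byte slot in a C struct.
const _: () = assert!(size_of::<TypeId>() <= 16);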

rustfmt

Miguel brought up the "trailing empty comment" workaround for the formatting issue that made the rounds on the Linux kernel a few weeks ago. The kernel style places each import on a single line:

    use crate::{
        fmt,
        page::AsPageIter,
    };

rustfmt compresses this to:

    use crate::{fmt, page::AsPageIter};

The workaround is to put an empty trailing comment at the end:

    use crate::{
        fmt,
        page::AsPageIter, //
    };

This was deemed acceptable (for the time being) and merged into the mainline kernel: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4a9cb2eecc78fa9d388481762dd798fa770e1971

Miguel is in contact with the rustfmt team about supporting this behaviour without a workaround.
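
For reference, nightly rustfmt already has an unstable imports_layout option that approximates the kernel's preferred style; a sketch of a rustfmt.toml using it (whether this option becomes the supported mechanism is an open question):

# rustfmt.toml -- unstable option, requires nightly rustfmt
imports_layout = "Vertical"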

// PANIC: ... comments / clippy#15895

This is a proposal to add a lint that would require a PANIC comment (modeled after the SAFETY comment) to explain the circumstances under which the code will or won't panic.

Alejandra González was open to the suggestion and Henry Barker stepped up to implement it.
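
To make the idea concrete, here is a sketch of what such a comment could look like, modeled on today's SAFETY comments (the exact name and conventions are up to the lint's design):

fn checked_get(values: &[u8], idx: usize) -> u8 {
    assert!(idx < values.len(), "idx out of bounds");
    // PANIC: the assert above guarantees `idx` is in bounds,
    // so the indexing below never panics.
    values[idx]
}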

Deref/Receiver

During the experimentation work, Ding ran into an issue with overlapping impls (that was present even with #[unstable_feature_bound(..)]). We ran out of time but we'll discuss this offline and return to it at the next meeting.

Implement Open API Namespace Support (rust-lang/rust-project-goals#256)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols)

Task owners

b-naber, Ed Page

No detailed updates available.
MIR move elimination (rust-lang/rust-project-goals#396)
Progress
Point of contact

Amanieu d'Antras

Champions

lang (Amanieu d'Antras)

Task owners

Amanieu d'Antras

No detailed updates available.
Prototype a new set of Cargo "plumbing" commands (rust-lang/rust-project-goals#264)
Progress
Point of contact

Help Wanted

Task owners

Help wanted, Ed Page

No detailed updates available.
Prototype Cargo build analysis (rust-lang/rust-project-goals#398)
Progress
Point of contact

Weihang Lo

Champions

cargo (Weihang Lo)

Task owners

Help wanted, Weihang Lo, Weihang Lo

1 detailed update available.

Comment by @weihanglo posted on 2025-10-04:

Cargo tracking issue: https://github.com/rust-lang/cargo/issues/15844. The first implementation was https://github.com/rust-lang/cargo/pull/15845 in August, which added build.analysis.enabled = true to unconditionally generate timing HTML. Further implementation tasks are listed in https://github.com/rust-lang/cargo/issues/15844#issuecomment-3192779748.

There hasn't been any progress in September.

reflection and comptime (rust-lang/rust-project-goals#406)
Progress
Point of contact

Oliver Scherer

Champions

compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett)

Task owners

oli-obk

1 detailed update available.

Comment by @oli-obk posted on 2025-10-22:

I implemented an initial MVP supporting only tuples and primitives (though those are just opaque things you can't interact with further), and getting offsets for the tuple fields as well as the size of the tuple: https://github.com/rust-lang/rust/pull/146923
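
The reflection API itself is unstable and still in flux, so no example of it is given here; for intuition, the information the MVP computes for a tuple corresponds to what stable Rust already exposes via mem::size_of and mem::offset_of!:

use core::mem::{offset_of, size_of};

fn main() {
    type Pair = (u8, u32);
    // Overall size of the tuple, including padding.
    println!("size = {}", size_of::<Pair>());
    // Offsets of the two tuple fields.
    println!("offset of .0 = {}", offset_of!(Pair, 0));
    println!("offset of .1 = {}", offset_of!(Pair, 1));
}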

There are two designs for how to expose this from a libs perspective, but after a sync meeting with scottmcm yesterday we came to the conclusion that neither is objectively better at this stage, so we're just going to go with the nice end-user UX version for now. For details see the PR description.

Once the MVP lands, I will mentor various interested contributors who will keep adding fields to the Type struct and variants to the TypeKind enum.

The next major step is restricting what information you can get from structs outside of the current module or crate. We want to honor visibility, so an initial step would be to just never show private fields, but we want to explore allowing private fields to be shown either just within the current module or via some opt-in marker trait.

Rework Cargo Build Dir Layout (rust-lang/rust-project-goals#401)
Progress
Point of contact

Ross Sullivan

Champions

cargo (Weihang Lo)

Task owners

Ross Sullivan

1 detailed update available.

Comment by @ranger-ross posted on 2025-10-06:
Status update October 6, 2025

The build-dir was split out of target-dir as part of https://github.com/rust-lang/cargo/issues/14125 and scheduled for stabilization in Rust 1.91.0. 🎉

Before re-organizing the build-dir layout we wanted to improve the existing layout tests to make sure we do not make any unexpected changes. This testing harness improvement was merged in https://github.com/rust-lang/cargo/pull/15874.

The initial build-dir layout reorganization PR has been posted at https://github.com/rust-lang/cargo/pull/15947, and discussion/reviews are under way.

Run more tests for GCC backend in the Rust's CI (rust-lang/rust-project-goals#402)
Progress
Point of contact

Guillaume Gomez

Champions

compiler (Wesley Wiser), infra (Marco Ieni)

Task owners

Guillaume Gomez

No detailed updates available.
Rust Stabilization of MemorySanitizer and ThreadSanitizer Support (rust-lang/rust-project-goals#403)
Progress
Point of contact

Jakob Koschel

Task owners

Bastian Kersting, Jakob Koschel

No detailed updates available.
Rust Vision Document (rust-lang/rust-project-goals#269)
Progress
Point of contact

Niko Matsakis

Task owners

vision team

1 detailed update available.

Comment by @jackh726 posted on 2025-10-22:

Update:

Niko and I gave a talk at RustConf 2025 (and I re-presented that talk at RustChinaConf 2025) where we gave an update on this (and some intermediate insights).

We have started to seriously plan the shape of the final doc. We have some "blind spots" that we'd like to cover before finishing up, but overall we're feeling close to the finish line on interviews.

rustc-perf improvements (rust-lang/rust-project-goals#275)
Progress
Point of contact

James

Champions

compiler (David Wood), infra (Jakub Beránek)

Task owners

James, Jakub Beránek, David Wood

1 detailed update available.

Comment by @Kobzol posted on 2025-10-21:

We moved forward with the implementation, and the new job queue system is now being tested in production on a single test pull request. Most things seem to be working, but there are a few things to iron out and some profiling to be done. I expect that within a few weeks we could be ready to switch to the new system fully in production.

Stabilize public/private dependencies (rust-lang/rust-project-goals#272)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page)

Task owners

Help wanted, Ed Page

No detailed updates available.
Stabilize rustdoc `doc_cfg` feature (rust-lang/rust-project-goals#404)
Progress
Point of contact

Guillaume Gomez

Champions

rustdoc (Guillaume Gomez)

Task owners

Guillaume Gomez

No detailed updates available.
SVE and SME on AArch64 (rust-lang/rust-project-goals#270)
Progress
Point of contact

David Wood

Champions

compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras)

Task owners

David Wood

1 detailed update available.

Comment by @nikomatsakis posted on 2025-10-22:

Sized hierarchy

The focus right now is on the "non-const" parts of the proposal, as the "const" parts are blocked on the new trait solver (https://github.com/rust-lang/rust-project-goals/issues/113). Now that the types team FCP https://github.com/rust-lang/rust/pull/144064 has completed, work can proceed to land the implementation PRs. David Wood plans to split the RFC to separate out the "non-const" parts of the proposal so it can move independently, which will enable extern types.

To that end, there are three interesting T-lang design questions to be considered.

Naming of the traits

The RFC currently proposes the following names:

  • Sized
  • MetaSized
  • PointeeSized

However, these names do not follow the "best practice" of naming the trait after the capability that it provides. As champion, Niko is recommending we shift to the following names:

  • Sized -- should rightly be called SizeOf, but oh well, not worth changing.
  • SizeOfVal -- named after the method size_of_val that you get access to.
  • Pointee -- the only thing you can do is point at it.

The last trait name is already used by the (unstable) std::ptr::Pointee trait. We do not want these to literally be the same trait, because that trait adds a Metadata associated type, which would be backwards incompatible: if existing code uses T::Metadata to mean <T as SomeOtherTrait>::Metadata, it could become ambiguous if T: Pointee now holds by default. My proposal is to rename std::ptr::Pointee to std::ptr::PointeeMetadata for now, since that trait is unstable and the design remains under some discussion. The two traits could either be merged eventually or remain separate.

Note that PointeeMetadata would be implemented automatically by the compiler for anything that implements Pointee.
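
As a standalone sketch of the proposed relationships (stand-in names are used here, since these are not real std items and the real Sized cannot be redefined):

// Purely illustrative stand-ins for the proposed hierarchy.
trait PointeeLike {}                // "Pointee": can only be pointed at
trait SizeOfValLike: PointeeLike {} // "SizeOfVal": size_of_val is available
trait SizedLike: SizeOfValLike {}   // "Sized": size known at compile time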

Syntax opt-in

The RFC proposes that an explicit bound like T: MetaSized disables the default T: Sized bound. However, this gives no signal that this trait bound is "special" or different from any other trait bound. Naming conventions can help here, signalling to users that these are special traits, but that leads to constraints on naming and may not scale as we consider using this mechanism to relax other defaults, as proposed in my recent blog post. One idea is to use some form of syntax, so that T: MetaSized is just a regular bound, but (for example) T: =MetaSized indicates that this bound "disables" the default Sized bound. This gives users some signal that something special is going on. This = syntax is borrowed from semver constraints, although it's not a precise match (it does not mean that T: Sized doesn't hold, after all). Other proposals would be some other sigil (T: ?MetaSized, but it means "opt out from the traits above you"; T: #MetaSized, ...) or a keyword (no idea).

To help us get a feel for it, I'll use T: =Foo throughout this post.
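
Today's closest analogue is ?Sized, which likewise replaces the default Sized bound; under the proposal, the same function might instead be written with T: =SizeOfVal (hypothetical syntax):

use core::mem::size_of_val;

// Today: `?Sized` opts out of the implicit `T: Sized` default.
// Proposed: roughly `fn size_in_bytes<T: =SizeOfVal>(v: &T) -> usize`.
fn size_in_bytes<T: ?Sized>(v: &T) -> usize {
    size_of_val(v)
}

fn main() {
    assert_eq!(size_in_bytes("hello"), 5);
}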

Implicit trait supertrait bounds, edition interaction

In Rust 2024, a trait's Self type is implicitly ?Sized, which gets mapped to =SizeOfVal:

trait Marker {} // cannot be implemented by extern types

This is not desirable but changing it would be backwards incompatible if traits have default methods that take advantage of this bound:

trait NotQuiteMarker {
    fn dummy(&self) {
        let s = size_of_val(self);
    }
}

We need to decide how to handle this. Options are:

  • Just change it, breakage will be small (have to test that).
  • Default to =SizeOfVal but let users explicitly write =Pointee if they want that. Bad because all traits will be incompatible with extern types.
  • Default to =SizeOfVal only if defaulted methods are present. Bad because it's a backwards incompatible change to add a defaulted method now.
  • Default to =Pointee but add where Self: =SizeOfVal implicitly to defaulted methods. Now it's not backwards incompatible to add a new defaulted method, but it is backwards incompatible to change an existing method to have a default.

If we go with one of the latter options, Niko proposes that we should relax this in the next Edition (Rust 2026?) so that the default becomes Pointee (or maybe not even that, if we can).

Relaxing associated type bounds

Under the RFC, existing ?Sized bounds would be equivalent to =SizeOfVal. This is mostly fine but will cause problems in (at least) two specific cases: closure bounds and the Deref trait. For closures, we can adjust the bound since the associated type is unstable and due to the peculiarities of our Fn() -> T syntax. Failure to adjust the Deref bound in particular would prohibit the use of Rc<E> where E is an extern type, etc.

For deref bounds, David Wood is preparing a PR that simply changes the bound in a backwards incompatible way to assess breakage on crater. There is some chance the breakage will be small.

If the breakage proves problematic, or if we find other traits that need to be relaxed in a similar fashion, we do have the option of:

  • In Rust 2024, T: Deref becomes equivalent to T: Deref<Target: SizeOfVal> unless written like T: Deref<Target: =Pointee>. We add that annotation throughout stdlib.
  • In Rust 202X, we change the default, so that T: Deref does not add any special bounds, and existing Rust 2024 T: Deref is rewritten to T: Deref<Target: SizeOfVal> as needed.
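
For intuition, here is the underlying tension in today's Rust, with a DST standing in for an extern type: the existing Target: ?Sized bound (equivalent to =SizeOfVal under the RFC) is what lets generic code call size_of_val on any Deref target, and an extern type could not support that (illustrative sketch):

use std::mem::size_of_val;
use std::ops::Deref;
use std::rc::Rc;

// Generic code may rely on `size_of_val` for any `Deref` target today; an
// extern type has no known size, so `Rc<ExternType>` needs a weaker bound.
fn pointee_size<P: Deref>(p: &P) -> usize {
    size_of_val(&**p)
}

fn main() {
    let rc: Rc<str> = Rc::from("extern-ish");
    assert_eq!(pointee_size(&rc), 10);
}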

Other notes

One topic that came up in discussion is that we may eventually wish to add a level "below" Pointee, perhaps Value, that signifies WebAssembly external values, which cannot be pointed at. That is not currently under consideration but should be backwards compatible.

Type System Documentation (rust-lang/rust-project-goals#405)
Progress
Point of contact

Boxy

Champions

types (Boxy)

Task owners

Boxy, lcnr

No detailed updates available.
Progress
Point of contact

Jack Wrenn

Champions

compiler (Jack Wrenn), lang (Scott McMurray)

Task owners

Jacob Pratt, Jack Wrenn, Luca Versari

No detailed updates available.

The Rust Programming Language Blog: Project goals update — September 2025

The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

"Beyond the `&`"

Progress
Point of contact

Frank King

Champions

compiler (Oliver Scherer), lang (TC)

Task owners

Frank King

No detailed updates available.
Design a language feature to solve Field Projections (rust-lang/rust-project-goals#390)
Progress
Point of contact

Benno Lossin

Champions

lang (Tyler Mandry)

Task owners

Benno Lossin

1 detailed update available.

Comment by @BennoLossin posted on 2025-09-24:

Key Developments

  • coordinating with #![feature(pin_ergonomics)] (https://github.com/rust-lang/rust/issues/130494) to ensure compatibility between the two features (allow custom pin projections to be the same as the ones for &pin mut T)
  • identified connection to auto reborrowing
    • https://github.com/rust-lang/rust-project-goals/issues/399
    • https://github.com/rust-lang/rust/issues/145612
  • held a design meeting
    • very positive feedback from the language team
    • approved lang experiment
    • got a vibe check on design axioms
  • created a new Zulip channel #t-lang/custom-refs for all new features needed to make custom references more similar to &T/&mut T such as field projections, auto reborrowing and more
  • created the tracking issue for #![feature(field_projections)]
  • opened https://github.com/rust-lang/rust/pull/146307 to implement field representing types (FRTs) in the compiler

Next Steps

  • Get https://github.com/rust-lang/rust/pull/146307 reviewed & merged

Help Wanted

  • When the PR for FRTs lands, try out the feature & provide feedback on FRTs
  • if possible, use the field-projection crate and provide feedback on projections

Internal Design Updates

Shared & Exclusive Projections

We want users to be able to have two different types of projections analogous to &T and &mut T. Each field can be projected independently and a single field can only be projected multiple times in a shared way. The current design uses two different traits to model this. The two traits are almost identical, except for their safety documentation.

We were considering whether it is possible to unify them into a single trait and have coercions, similar to auto-reborrowing, that would allow the borrow checker to change the behavior depending on which type is projected.

Syntax

There are lots of different possibilities for which syntax we can choose; here are a couple of options: @x->f/@mut x->f, @x.f/@mut x.f, x.@f/x.mut@f, x.ref.@f/x.@f. Also many alternatives for the sigils used: x@f, x~f, x.@.f.

We have yet to decide on a direction we want to go in. If we are able to merge the two projection traits, we can also settle on a single syntax, which would be great.

Splitting Projections into Containers & Pointers

There are two categories of projections, containers and pointers:

  • Containers are types like MaybeUninit<T>, Cell<T>, UnsafeCell<T>, ManuallyDrop<T>. They are repr(transparent) and apply themselves to each field, so MaybeUninit<MyStruct> has a field of type MaybeUninit<MyField> (if MyStruct has a field of type MyField).
  • Pointers are types like &T, &mut T, cell::Ref[Mut]<'_, T>, *const T/*mut T, NonNull<T>. They support projecting Pointer<'_, Struct> to Pointer<'_, Field>.

In the current design, these two classes of projections are unified by just implementing Pointer<'_, Container<Struct>> -> Pointer<'_, Container<Field>> manually for the common use-cases (for example &mut MaybeUninit<Struct> -> &mut MaybeUninit<Field>). However, this means that things like &Cell<MaybeUninit<Struct>> don't have native projections unless we explicitly implement them.
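
For intuition, here is what a hand-written container projection looks like today; this is roughly the kind of impl the feature would generate, and the function itself is illustrative, not part of any proposed API:

use core::mem::MaybeUninit;

#[allow(dead_code)]
struct Point {
    x: i32,
    y: i32,
}

// Container projection: &mut MaybeUninit<Point> -> &mut MaybeUninit<i32>.
fn project_x(p: &mut MaybeUninit<Point>) -> &mut MaybeUninit<i32> {
    // SAFETY: `x` is a field of `Point`, so the field pointer stays in
    // bounds, and `MaybeUninit<i32>` imposes no validity requirements.
    unsafe { &mut *(&raw mut (*p.as_mut_ptr()).x).cast::<MaybeUninit<i32>>() }
}

fn main() {
    let mut p = MaybeUninit::<Point>::uninit();
    project_x(&mut p).write(7);
}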

We could try to go for a design that has two different ways to implement projections -- one for containers and one for pointers. But this has the following issues:

  • there are two ways to implement projections, which means that some people will get confused about which one they should use.
  • making projections through multiple container types work out of the box is great; however, this means that when defining a new container type and making it available for projections, one needs to consider all other container types and swear coherence with them. If we instead have an explicit way to opt in to projections through multiple container types, the implementer of that trait only has to reason about the types involved in that operation.
    • so to rephrase: the current design makes the container types users actually use projectable, whereas the split design allows arbitrary nestings of container types to be projected while disallowing certain types from being considered container types.
  • The same problem exists for allowing all container types to be projected by pointer types: if I define a new pointer type, I again need to reason about all container types and whether it's sound to project them.

We might be able to come up with a sensible definition of "container type" which then resolves these issues, but further investigation is required.

Projections for &Custom<U>

We want to be able to have both a blanket impl<T, F: Field<Base = T>> Project<F> for &T as well as allow people to have custom projections on &Custom<U>. The motivating example for custom projections is the Rust-for-Linux Mutex that wants these projections for safe RCU abstractions.

During the design meeting, it was suggested we could add a generic to Project that only the compiler is allowed to insert; this would allow disambiguation between the two impls. We have now found an alternative approach that requires less specific compiler magic:

  • Add a new marker trait ProjectableBase that's implemented for all types by default.
  • People can opt out of implementing it by writing impl !ProjectableBase for MyStruct; (needs negative impls for marker traits).
  • We add where T: ProjectableBase to the impl Project for &T.
  • The compiler needs to consider the negative impls in the overlap check for users to be able to write their own impl<U, F> Project<F> for &Custom<U> where ... (needs negative impl overlap reasoning)

We probably want negative impls for marker traits as well as improved overlap reasoning for different reasons too, so it is probably fine to depend on them here.

enum support

enum and union shouldn't be available for projections by default. Take for example &Cell<Enum>: if we project to a variant, someone else could overwrite the value with a different variant, invalidating our &Cell<Field>. This also needs a new trait, probably AlwaysActiveField (needs more name bikeshedding, but it's too early for that), that marks fields in structs and tuples.

To properly project an enum, we need:

  • a new CanProjectEnum (TBB) trait that provides a way to read the discriminant that's currently inhabiting the value.
    • it also needs to guarantee that the discriminant doesn't change while fields are being projected (this rules out implementing it for &Cell)
  • a new match operator that will project all mentioned fields (for &Enum this already is the behavior for match)

Field Representing Types (FRTs)

While implementing https://github.com/rust-lang/rust/pull/146307 we identified the following problems/design decisions:

  • an FRT is considered local for the orphan check when each container base type involved in the field path is local or a tuple (see the top comment on the PR for more info)
  • FRTs cannot implement Drop
  • the Field trait is not user-implementable
  • types with fields that are dynamically sized don't have a statically known offset, which complicates the UnalignedField trait.

I decided to simplify the first implementation of FRTs and restrict them to sized structs and tuples. It also doesn't support packed structs. Future PRs will add support for enums, unions and packed structs as well as dynamically sized types.

Progress
Point of contact

Aapo Alasuutari

Champions

compiler (Oliver Scherer), lang (Tyler Mandry)

Task owners

Aapo Alasuutari

No detailed updates available.

"Flexible, fast(er) compilation"

Progress
Point of contact

David Wood

Champions

cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)

Task owners

Adam Gemmell, David Wood

1 detailed update available.

Comment by @adamgemmell posted on 2025-09-12:

Recently we've been working through feedback on the multi-staged format of the RFC. We've also shared the RFC outside of our sync call group, with people from a variety of project teams as well as potential users.

We're now receiving feedback that is much more detail-oriented, as opposed to being about the direction and scope of the RFC, which is a good indication that the overall strategy for shipping this RFC is promising. We're continuing to address feedback to ensure the RFC is clear, consistent and technically feasible. David's feeling is that we've probably got another couple of rounds of feedback from the people currently involved, and then we'll invite more people from various groups before formally publishing parts of the RFC.

Production-ready cranelift backend (rust-lang/rust-project-goals#397)
Progress
Point of contact

Folkert de Vries

Champions

compiler (bjorn3)

Task owners

bjorn3, Folkert de Vries, Trifecta Tech Foundation

No detailed updates available.
Promoting Parallel Front End (rust-lang/rust-project-goals#121)
Progress
Point of contact

Sparrow Li

Task owners

Sparrow Li

Help wanted:

Help test the deadlock code in the issue list and try to reproduce the issue

1 detailed update available.

Comment by @SparrowLii posted on 2025-09-17:
  • Key developments: We have added more tests for deadlock issues, and we can say that the deadlock problems are almost resolved. We are currently addressing issues related to reproducible builds, and some of these have already been resolved.
  • Blockers: none
  • Help wanted: Help test the deadlock code in the issue list and try to reproduce the issue
Relink don't Rebuild (rust-lang/rust-project-goals#400)
Progress
Point of contact

Jane Lusby

Champions

cargo (Weihang Lo), compiler (Oliver Scherer)

Task owners

Ally Sommers, Piotr Osiewicz

No detailed updates available.

"Higher-level Rust"

Stabilize cargo-script (rust-lang/rust-project-goals#119)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett)

Task owners

Ed Page

1 detailed update available.

Comment by @epage posted on 2025-09-16:

Key developments:

  • Overall polish
    • https://github.com/rust-lang/rust/pull/145751
    • https://github.com/rust-lang/rust/pull/145754
    • https://github.com/rust-lang/rust/pull/146106
    • https://github.com/rust-lang/rust/pull/146137
    • https://github.com/rust-lang/rust/pull/146211
    • https://github.com/rust-lang/rust/pull/146340
    • https://github.com/rust-lang/rust/pull/145568
    • https://github.com/rust-lang/cargo/pull/15878
    • https://github.com/rust-lang/cargo/pull/15886
    • https://github.com/rust-lang/cargo/pull/15899
    • https://github.com/rust-lang/cargo/pull/15914
    • https://github.com/rust-lang/cargo/pull/15927
    • https://github.com/rust-lang/cargo/pull/15939
    • https://github.com/rust-lang/cargo/pull/15952
    • https://github.com/rust-lang/cargo/pull/15972
    • https://github.com/rust-lang/cargo/pull/15975
  • rustfmt work
    • https://github.com/rust-lang/rust/pull/145617
    • https://github.com/rust-lang/rust/pull/145766
  • Reference work
    • https://github.com/rust-lang/reference/pull/1974

"Unblocking dormant traits"

Progress
Point of contact

Taylor Cramer

Champions

lang (Taylor Cramer), types (Oliver Scherer)

Task owners

Taylor Cramer, Taylor Cramer & others

1 detailed update available.

Comment by @cramertj posted on 2025-09-30:

Current status: there is an RFC for auto impl supertraits that has received some discussion and updates (thank you, Ding Xiang Fei!).

The major open questions currently are:

Syntax

The current RFC proposes:

trait Subtrait: Supertrait {
    auto impl Supertrait {
        // Supertrait items defined in terms of Subtrait items, if any
    }
}

Additionally, there is an open question around the syntax of auto impl for unsafe supertraits. The current proposal is to require unsafe auto impl Supertrait.

Whether to require impls to opt-out of auto impls

The current RFC proposes that

impl Supertrait for MyType {}
impl Subtrait for MyType {
    // Required in order to manually write `Supertrait` for MyType.
    extern impl Supertrait;
}

This makes it explicit via opt-out whether an auto impl is being applied. However, this is in conflict with the goal of allowing auto impls to be added to existing trait hierarchies. The RFC proposes to resolve this via a temporary attribute which triggers a warning. See my comment here.

Note that properly resolving whether or not to apply an auto impl requires coherence-like analysis.

In-place initialization (rust-lang/rust-project-goals#395)
Progress
Point of contact

Alice Ryhl

Champions

lang (Taylor Cramer)

Task owners

Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts

No detailed updates available.
Next-generation trait solver (rust-lang/rust-project-goals#113)
Progress
Point of contact

lcnr

Champions

types (lcnr)

Task owners

Boxy, Michael Goulet, lcnr

No detailed updates available.
Stabilizable Polonius support on nightly (rust-lang/rust-project-goals#118)
Progress
Point of contact

Rémy Rakic

Champions

types (Jack Huey)

Task owners

Amanda Stjerna, Rémy Rakic, Niko Matsakis

No detailed updates available.

Goals looking for help

No goals listed.

Other goal updates

Borrow checking in a-mir-formality (rust-lang/rust-project-goals#122)
Progress
Point of contact

Niko Matsakis

Champions

types (Niko Matsakis)

Task owners

Niko Matsakis, tiif

No detailed updates available.
C++/Rust Interop Problem Space Mapping (rust-lang/rust-project-goals#388)
Progress
Point of contact

Jon Bauman

Champions

compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay)

Task owners

Jon Bauman

No detailed updates available.
Comprehensive niche checks for Rust (rust-lang/rust-project-goals#262)
Progress
Point of contact

Bastian Kersting

Champions

compiler (Ben Kimock), opsem (Ben Kimock)

Task owners

Bastian Kersting, Jakob Koschel

No detailed updates available.
Progress
Point of contact

Boxy

Champions

lang (Niko Matsakis)

Task owners

Boxy, Noah Lev

No detailed updates available.
Continue resolving `cargo-semver-checks` blockers for merging into cargo (rust-lang/rust-project-goals#104)
Progress
Point of contact

Predrag Gruevski

Champions

cargo (Ed Page), rustdoc (Alona Enraght-Moony)

Task owners

Predrag Gruevski

1 detailed update available.

Comment by @obi1kenobi posted on 2025-09-19:

Just removed the duplicate posts, guessing from a script that had a bad day.

Develop the capabilities to keep the FLS up to date (rust-lang/rust-project-goals#391)
Progress
Point of contact

Pete LeVasseur

Champions

bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur)

Task owners

Pete LeVasseur, Contributors from Ferrous Systems and others TBD, t-spec and contributors from Ferrous Systems

No detailed updates available.
Emit Retags in Codegen (rust-lang/rust-project-goals#392)
Progress
Point of contact

Ian McCormack

Champions

compiler (Ralf Jung), opsem (Ralf Jung)

Task owners

Ian McCormack

No detailed updates available.
Expand the Rust Reference to specify more aspects of the Rust language (rust-lang/rust-project-goals#394)
Progress
Point of contact

Josh Triplett

Champions

lang-docs (Josh Triplett), spec (Josh Triplett)

Task owners

Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby

No detailed updates available.
Finish the libtest json output experiment (rust-lang/rust-project-goals#255)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page)

Task owners

Ed Page

1 detailed update available.

Comment by @epage posted on 2025-09-16:

Key developments:

  • libtest2
    • libtest env variables were deprecated, reducing the API surface for custom test harnesses, https://github.com/rust-lang/rust/pull/145269
    • libtest2 was updated to reflect deprecations
    • https://github.com/assert-rs/libtest2/pull/105
    • libtest2 is now mostly in shape for use
  • json schema
    • https://github.com/assert-rs/libtest2/pull/107
    • https://github.com/assert-rs/libtest2/pull/108
    • https://github.com/assert-rs/libtest2/pull/111
    • https://github.com/assert-rs/libtest2/pull/120
    • starting exploration of extension through custom messages, see https://github.com/assert-rs/libtest2/pull/122

New areas found for further exploration

  • Fallible discovery
  • Nested discovery
Finish the std::offload module (rust-lang/rust-project-goals#109)
Progress
Point of contact

Manuel Drehwald

Champions

compiler (Manuel Drehwald), lang (TC)

Task owners

Manuel Drehwald, LLVM offload/GPU contributors

No detailed updates available.
Getting Rust for Linux into stable Rust: compiler features (rust-lang/rust-project-goals#407)
Progress
Point of contact

Tomas Sedovic

Champions

compiler (Wesley Wiser)

Task owners

(depending on the flag)

No detailed updates available.
Getting Rust for Linux into stable Rust: language features (rust-lang/rust-project-goals#116)
Progress
Point of contact

Tomas Sedovic

Champions

lang (Josh Triplett), lang-docs (TC)

Task owners

Ding Xiang Fei

No detailed updates available.
Implement Open API Namespace Support (rust-lang/rust-project-goals#256)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols)

Task owners

b-naber, Ed Page

No detailed updates available.
MIR move elimination (rust-lang/rust-project-goals#396)
Progress
Point of contact

Amanieu d'Antras

Champions

lang (Amanieu d'Antras)

Task owners

Amanieu d'Antras

No detailed updates available.
Prototype a new set of Cargo "plumbing" commands (rust-lang/rust-project-goals#264)
Progress
Point of contact

Help Wanted

Task owners

Help wanted, Ed Page

1 detailed update available.

Comment by @epage posted on 2025-09-16:

Key developments:

  • https://github.com/crate-ci/cargo-plumbing/pull/53
  • https://github.com/crate-ci/cargo-plumbing/pull/62
  • https://github.com/crate-ci/cargo-plumbing/pull/68
  • https://github.com/crate-ci/cargo-plumbing/pull/96
  • Further schema discussions at https://github.com/crate-ci/cargo-plumbing/discussions/18
  • Writing up https://github.com/crate-ci/cargo-plumbing/issues/82

Major obstacles

  • Cargo, being designed for itself, doesn't allow working with arbitrary data, see https://github.com/crate-ci/cargo-plumbing/issues/82
Prototype Cargo build analysis (rust-lang/rust-project-goals#398)
Progress
Point of contact

Weihang Lo

Champions

cargo (Weihang Lo)

Task owners

Help wanted, Weihang Lo, Weihang Lo

No detailed updates available.
reflection and comptime (rust-lang/rust-project-goals#406)
Progress
Point of contact

Oliver Scherer

Champions

compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett)

Task owners

oli-obk

No detailed updates available.
Rework Cargo Build Dir Layout (rust-lang/rust-project-goals#401)
Progress
Point of contact

Ross Sullivan

Champions

cargo (Weihang Lo)

Task owners

Ross Sullivan

No detailed updates available.
Run more tests for GCC backend in the Rust's CI (rust-lang/rust-project-goals#402)
Progress
Point of contact

Guillaume Gomez

Champions

compiler (Wesley Wiser), infra (Marco Ieni)

Task owners

Guillaume Gomez

No detailed updates available.
Rust Stabilization of MemorySanitizer and ThreadSanitizer Support (rust-lang/rust-project-goals#403)
Progress
Point of contact

Jakob Koschel

Task owners

Bastian Kersting, Jakob Koschel

No detailed updates available.
Rust Vision Document (rust-lang/rust-project-goals#269)
Progress
Point of contact

Niko Matsakis

Task owners

vision team

No detailed updates available.
rustc-perf improvements (rust-lang/rust-project-goals#275)
Progress
Point of contact

James

Champions

compiler (David Wood), infra (Jakub Beránek)

Task owners

James, Jakub Beránek, David Wood

1 detailed update available.

Comment by @Jamesbarford posted on 2025-09-17:

It is now possible to run the system with two different machines on two different architectures; however, there is work to be done to make this more robust.

We have worked on ironing out the last bits and pieces for dequeuing benchmarks, as well as creating a new user interface to reflect multiple collectors doing work. Presently, work is mostly on polishing the UI and handling edge cases found through manual testing.

Queue Work:

  • https://github.com/rust-lang/rustc-perf/pull/2212
  • https://github.com/rust-lang/rustc-perf/pull/2214
  • https://github.com/rust-lang/rustc-perf/pull/2216
  • https://github.com/rust-lang/rustc-perf/pull/2221
  • https://github.com/rust-lang/rustc-perf/pull/2226
  • https://github.com/rust-lang/rustc-perf/pull/2230
  • https://github.com/rust-lang/rustc-perf/pull/2231

UI:

  • https://github.com/rust-lang/rustc-perf/pull/2217
  • https://github.com/rust-lang/rustc-perf/pull/2220
  • https://github.com/rust-lang/rustc-perf/pull/2224
  • https://github.com/rust-lang/rustc-perf/pull/2227
  • https://github.com/rust-lang/rustc-perf/pull/2232
  • https://github.com/rust-lang/rustc-perf/pull/2233
  • https://github.com/rust-lang/rustc-perf/pull/2236
Stabilize public/private dependencies (rust-lang/rust-project-goals#272)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page)

Task owners

Help wanted, Ed Page

No detailed updates available.
Stabilize rustdoc `doc_cfg` feature (rust-lang/rust-project-goals#404)
Progress
Point of contact

Guillaume Gomez

Champions

rustdoc (Guillaume Gomez)

Task owners

Guillaume Gomez

No detailed updates available.
SVE and SME on AArch64 (rust-lang/rust-project-goals#270)
Progress
Point of contact

David Wood

Champions

compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras)

Task owners

David Wood

No detailed updates available.
Type System Documentation (rust-lang/rust-project-goals#405)
Progress
Point of contact

Boxy

Champions

types (Boxy)

Task owners

Boxy, lcnr

No detailed updates available.
Progress
Point of contact

Jack Wrenn

Champions

compiler (Jack Wrenn), lang (Scott McMurray)

Task owners

Jacob Pratt, Jack Wrenn, Luca Versari

No detailed updates available.

Mozilla Thunderbird: Thunderbird Adds Native Microsoft Exchange Email Support

If your organization uses Microsoft Exchange-based email, you’ll be happy to hear that Thunderbird’s latest monthly release, version 145, now officially supports native access via the Exchange Web Services (EWS) protocol. With EWS now built directly into Thunderbird, a third-party add-on is no longer required for email functionality. Calendar and address book support for Exchange accounts remains on the roadmap, but email integration is here and ready to use!

What changes for Thunderbird users

Until now, Thunderbird users in Exchange hosted environments often relied on IMAP/POP protocols or third-party extensions. With full native Exchange support for email, Thunderbird now works more seamlessly in Exchange environments, including full folder listings, message synchronization, folder management both locally and on the server, attachment handling, and more. This simplifies life for users who depend on Exchange for email but prefer Thunderbird as their client.

How to get started

For many people switching from Outlook to Thunderbird, the most common setup involves Microsoft-hosted Exchange accounts such as Microsoft 365 or Office 365. Thunderbird now uses Microsoft’s standard sign-in process (OAuth2) and automatically detects your account settings, so you can start using your email right away without any extra setup.

If this applies to you, setup is straightforward:

  1. Create a new account in Thunderbird 145 or newer.
  2. In the new Account Hub, select Exchange (or Exchange Web Services in legacy setup).
  3. Let Thunderbird handle the rest!

Important note: If you see something different, or need more details or advice, please see our support page and wiki page. Also, some authentication configurations are not supported yet, and you may need to wait for a further update that expands compatibility; please refer to the table below for more details.

What functionality is supported now and what’s coming soon

As mentioned earlier, EWS support in version 145 currently enables email functionality only. Calendar and address book integration are in active development and will be added in future releases. The chart below provides an at-a-glance view of what’s supported today.

| Feature area | Supported now | Not yet supported |
| --- | --- | --- |
| Email – account setup & folder access | ✅ Creating accounts via auto-config with EWS, server-side folder manipulation | |
| Email – message operations | ✅ Viewing messages, sending, replying/forwarding, moving/copying/deleting | |
| Email – attachments | ✅ Attachments can be saved and displayed with detach/delete support | |
| Search & filtering | ✅ Search subject and body, quick filtering | ❌ Filter actions requiring full body content are not yet supported |
| Accounts hosted on Microsoft 365 | ✅ Domains using the standard Microsoft OAuth2 endpoint | ❌ Domains requiring custom OAuth2 application and tenant IDs will be supported in the future |
| Accounts hosted on-premise | ✅ Password-based Basic authentication | ❌ Password-based NTLM authentication and OAuth2 for on-premise servers are on the roadmap |
| Calendar support | | ❌ Not yet implemented – calendar syncing is on the roadmap |
| Address book / contacts support | | ❌ Not yet implemented – address book support is on the roadmap |
| Microsoft Graph support | | ❌ Not yet implemented – Microsoft Graph integration will be added in the future |

Exchange Web Services and Microsoft Graph

While many people and organizations still rely on Exchange Web Services (EWS), Microsoft has begun gradually phasing it out in favor of a newer, more modern interface called Microsoft Graph. Microsoft has stated that EWS will continue to be supported for the foreseeable future, but over time, Microsoft Graph will become the primary way to connect to Microsoft 365 services.

Because EWS remains widely used today, we wanted to deliver full support for it first, to ensure compatibility for existing users. At the same time, we’re actively working to add support for Microsoft Graph, so Thunderbird will be ready as Microsoft transitions to its new standard.

Looking ahead

While Exchange email is available now, calendar and address book integration are on the way, bringing Thunderbird closer to being a complete solution for Exchange users. For many people, having reliable email access is the most important step, but if you depend on calendar and contact synchronization, we’re working hard to bring this to Thunderbird in the near future, making Thunderbird a strong alternative to Outlook.

Keep an eye on future releases for additional support and integrations, but in the meantime, enjoy a smoother Exchange email experience within your favorite email client!


If you want to know more about Exchange support in Thunderbird, please refer to the dedicated page on support.mozilla.org. Organization admins can also find out more on the Mozilla wiki page. To follow ongoing and future work in this area, please refer to the relevant meta-bug on Bugzilla.

The post Thunderbird Adds Native Microsoft Exchange Email Support appeared first on The Thunderbird Blog.

The Rust Programming Language Blog: Google Summer of Code 2025 results

As we announced earlier this year, the Rust Project participated in Google Summer of Code (GSoC) for the second time. Almost twenty contributors have been working very hard on their projects for several months. Same as last year, the projects had various durations; some of them ended in September, while the last ones concluded in the middle of November. Now that the final reports of all projects have been submitted, we are happy to announce that 18 out of 19 projects have been successful! We had a very large number of projects this year, so we consider this number of successfully finished projects to be a great result.

We had awesome interactions with our GSoC contributors over the summer, and through a video call we also had a chance to see each other and discuss the accepted GSoC projects. Our contributors have learned a lot of new things and collaborated with us on making Rust better for everyone, and we are very grateful for all their contributions! Some of them have even continued contributing after their projects ended, and we hope to keep working with them in the future to further improve open-source Rust software. We would like to thank all our Rust GSoC 2025 contributors. You did a great job!

Same as last year, Google Summer of Code 2025 was overall a success for the Rust Project, this time with more than double the number of projects. We think that GSoC is a great way of introducing new contributors to our community, and we are looking forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our GSoC project idea list and our guide for new contributors.

Below you can find a brief summary of our GSoC 2025 projects. You can find more information about the original goals of the projects here. For easier navigation, here is an index of the project descriptions in alphabetical order:

And now strap in, as there is a ton of great content to read about here!

ABI/Layout handling for the automatic differentiation feature

The std::autodiff module allows computing gradients and derivatives in the calculus sense. It provides two autodiff macros, which can be applied to user-written functions and automatically generate modified versions of those functions, which also compute the requested gradients and derivatives. This functionality is very useful especially in the context of scientific computing and implementation of machine-learning models.
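
As a rough illustration of what the macros do (a sketch based on the nightly feature's documented shape at the time of writing; the exact attribute names and generated signatures have been evolving and may differ):

// Nightly-only sketch; names follow earlier std::autodiff examples.
#![feature(autodiff)]
use std::autodiff::autodiff_reverse;

// Generates `d_square`, which returns the original result together with
// the derivative with respect to `x`, scaled by the seed argument.
#[autodiff_reverse(d_square, Active, Active)]
fn square(x: f64) -> f64 {
    x * x
}

fn main() {
    let (value, dx) = d_square(3.0, 1.0);
    assert_eq!(value, 9.0); // square(3.0)
    assert_eq!(dx, 6.0);    // d/dx x^2 = 2x at x = 3
}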

Our autodiff frontend was facing two challenges.

  • First, we would generate a new function through our macro expansion; however, we would not have a suitable function body for it yet. Our autodiff implementation relies on an LLVM plugin to generate the function body. However, this plugin only gets called towards the end of the compilation pipeline. Earlier optimization passes, either on the LLVM or the Rust side, could look at the placeholder body and either "optimize" or even delete the function, since it has no clear purpose yet.
  • Second, the flexibility of our macros was causing issues, since it allows requesting derivative computations on a per-argument basis. However, when we start to lower Rust arguments to our compiler backends like LLVM, we do not always have a 1:1 match of Rust arguments to LLVM arguments. As a simple example, an array with two double values might be passed as two individual double values at the LLVM level, whereas an array with three doubles might be passed via a pointer.

Marcelo helped rewrite our autodiff macros to not generate hacky placeholder function bodies, but instead introduced a proper autodiff intrinsic. This is the proper way for us to declare that an implementation of this function is not available yet and will be provided later in the compilation pipeline. As a consequence, our generated functions were not deleted or incorrectly optimized anymore. The intrinsic PR also allowed removing some previous hacks and therefore reduced the total lines of code in the Rust compiler by over 500! You can find more details in this PR.

Beyond autodiff work, Marcelo also initiated work on GPU offloading intrinsics, and helped with multiple bugs in our argument handling. We would like to thank Marcelo for all his great work!

Add safety contracts

The Rust Project has an ambitious goal to instrument the Rust standard library with safety contracts, moving from informal comments that specify safety requirements of unsafe functions to executable Rust code. This transformation represents a significant step toward making Rust's safety guarantees more explicit and verifiable. To prioritize which functions should receive contracts first, there is an ongoing verification contest.

Given that Rust contracts are still in their early stages, Dawid's project was intentionally open-ended in scope and direction. This flexibility allowed Dawid to identify and tackle several key areas that would add substantial value to the contracts ecosystem. His contributions were in the following three main areas:

  • Pragmatic Contracts Integration: Refactoring contract HIR lowering to ensure no contract code is executed when contract-checks are disabled. This has major impact as it ensures that contracts do not have runtime cost when contract checks are disabled.

  • Variable Reference Capability: Adding the ability to refer to variables from preconditions within postconditions. This fundamental enhancement to the contracts system has been fully implemented and merged into the compiler. This feature provides developers with much more expressive power when writing contracts, allowing them to establish relationships between input and output states.

  • Separation Logic Integration: The bulk of Dawid's project involved identifying, understanding, and planning the introduction of owned and block ownership predicates for separation-logic style reasoning in contracts for unsafe Rust code. This work required extensive research and collaboration with experts in the field. Dawid engaged in multiple discussions with authors of Rust validation tools and Miri developers, both in person and through Zulip discussion threads. The culmination of this research is captured in a comprehensive MCP (Major Change Proposal) that Dawid created.

Dawid's work represents crucial foundational progress for Rust's safety contracts initiative. By successfully implementing variable reference capabilities and laying the groundwork for separation logic integration, he has positioned the contracts feature for significant future development. His research and design work will undoubtedly influence the direction of this important safety feature as it continues to mature. Thank you very much!

Bootstrap of rustc with rustc_codegen_gcc

The goal of this project was to improve the Rust GCC codegen backend (rustc_codegen_gcc), so that it would be able to compile the "stage 2" Rust compiler (rustc) itself again.

You might remember that Michał already participated in GSoC last year, where he was working on his own .NET Rust codegen backend, and he did an incredible amount of work. This year, his progress was somehow even faster. Even before the official GSoC implementation period started (!), he essentially completed his original project goal and managed to build rustc with GCC. This was no small feat, as he had to investigate and fix several miscompilations that occurred when functions marked with #[inline(always)] were called recursively or when the compiled program was trying to work with 128-bit integers. You can read more about this initial work at his blog.

After that, he immediately started working on the stretch goals of his project. The first one was to get a "stage 3" rustc build working, for which he had to vastly reduce the memory consumption of the codegen backend.

Once that was done, he moved on to yet another goal, which was to build rustc for a platform not supported by LLVM. He made progress on this for DEC Alpha and m68k. He also attempted to compile rustc on AArch64, which led to him finding an ABI bug. Ultimately, he managed to build a rustc for m68k (with a few workarounds that we will need to fix in the future). That is a very nice first step toward porting Rust to new platforms unsupported by LLVM, and is important for initiatives such as Rust for Linux.

Michał had to spend a lot of time staring into assembly code and investigating arcane ABI problems. In order to make this easier for everyone, he implemented support for fuzzing and automatically checking ABI mismatches in the GCC codegen backend. You can read more about his testing and fuzzing efforts here.

We were really impressed with what Michał was able to achieve, and we really appreciated working with him this summer. Thank you for all your work, Michał!

Cargo: Build script delegation

Cargo build scripts come at a compile-time cost, because even to run cargo check, they must be built as if you ran cargo build, so that they can be executed during compilation. Even though we try to identify ways to reduce the need to write build scripts in the first place, that may not always be doable. However, if we could shift build scripts from being defined in every package that needs them, into a few core build script packages, we could both reduce the compile-time overhead, and also improve their auditability and transparency. You can find more information about this idea here.

The first step required to delegate build scripts to packages is to be able to run multiple build scripts per crate, so that is what Naman was primarily working on. He introduced a new unstable multiple-build-scripts feature to Cargo, implemented support for parsing an array of build scripts in Cargo.toml, and extended Cargo so that it can now execute multiple build scripts while building a single crate. He also added a set of tests to ensure that this feature will work as we expect it to.

Then he worked on ensuring that the execution of build scripts is performed in a deterministic order, and that crates can access the output of each build script separately. For example, if you have the following configuration:

[package]
build = ["windows-manifest.rs", "release-info.rs"]

then the corresponding crate is able to access the OUT_DIRs of both build scripts using env!("windows-manifest_OUT_DIR") and env!("release-info_OUT_DIR").

As future work, we would like to implement the ability to pass parameters to build scripts through metadata specified in Cargo.toml and then implement the actual build script delegation to external build scripts using artifact-dependencies.

We would like to thank Naman for helping improve Cargo and laying the groundwork for a feature that could have compile-time benefits across the Rust ecosystem!

Distributed and resource-efficient verification

The goal of this project was to address critical scalability challenges of formally verifying Rust's standard library by developing a distributed verification system that intelligently manages computational resources and minimizes redundant work. The Rust standard library verification project faces significant computational overhead when verifying large codebases, as traditional approaches re-verify unchanged code components. With Rust's standard library containing thousands of functions and continuous development cycles, this inefficiency becomes a major bottleneck for practical formal verification adoption.

Jiping implemented a distributed verification system with several key innovations:

  • Intelligent Change Detection: The system uses hash-based analysis to identify which parts of the codebase have actually changed, allowing verification to focus only on modified components and their dependencies (see the sketch after this list).
  • Multi-Tool Orchestration: The project coordinates multiple verification backends, including the Kani model checker, with careful version pinning and compatibility management.
  • Distributed Architecture: The verification workload is distributed across multiple compute nodes, with intelligent scheduling that considers both computational requirements and dependency graphs.
  • Real-time Visualization: Jiping built a comprehensive web interface that provides live verification status, interactive charts, and detailed proof results. You can check it out here!
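
A minimal sketch of the hash-based change detection idea mentioned above (illustrative only; the actual system's design lives in the repository linked below):

use std::collections::HashMap;
use std::hash::{DefaultHasher, Hash, Hasher};

// Fingerprint an item's source text; re-verify only when it changes.
fn fingerprint(src: &str) -> u64 {
    let mut h = DefaultHasher::new();
    src.hash(&mut h);
    h.finish()
}

// Returns the items whose fingerprint differs from the previous run.
fn changed_items<'a>(
    previous: &HashMap<&'a str, u64>,
    current: &[(&'a str, &'a str)],
) -> Vec<&'a str> {
    current
        .iter()
        .filter(|(name, src)| previous.get(name) != Some(&fingerprint(src)))
        .map(|(name, _)| *name)
        .collect()
}

fn main() {
    let previous = HashMap::from([("len", fingerprint("fn len() {}"))]);
    let current = [("len", "fn len() {}"), ("get", "fn get() {}")];
    assert_eq!(changed_items(&previous, &current), vec!["get"]);
}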

You can find the created distributed verification tool in this repository. Jiping's work established a foundation for scalable formal verification that can adapt to the growing complexity of Rust's ecosystem, while maintaining verification quality and completeness, which will go a long way towards ensuring that Rust's standard library remains safe and sound. Thank you for your great work!

Enable Witness Generation in cargo-semver-checks

cargo-semver-checks is a Cargo subcommand for finding SemVer API breakages in Rust crates. Talyn's project aimed to lay the groundwork for it to tackle our most vexing limitation: the inability to catch SemVer breakage due to type changes.

Imagine a crate makes the following change to its public API:

// baseline version
pub fn example(value: i64) {}

// new version
pub fn example(value: String) {}

This is clearly a major breaking change, right? And yet cargo-semver-checks with its hundreds of lints is still unable to flag this. While this case seems trivial, it's just the tip of an enormous iceberg. Instead of changing i64 to String, what if the change was from i64 to impl Into<i64>, or worse, into some monstrosity like:

pub fn example<T, U, const N: usize>(
    value: impl for<'a> First<'a, T> + Second<U, N> + Sync
) {}

Figuring out whether this change is breaking requires checking whether the original i64 parameter type can "fit" into that monstrosity of an impl Trait type. But reimplementing a Rust type checker and trait solver inside cargo-semver-checks is out of the question! Instead, we turn to a technique created for a previous study of SemVer breakage on crates.io—we generate a "witness" program that will fail to compile if, and only if, there's a breaking change between the two versions.

The witness program is a separate crate that can be made to depend on either the old or the new version of the crate being scanned. If our example function comes from a crate called upstream, its witness program would look something like:

// take the same parameter type as the baseline version
fn witness(value: i64) {
    upstream::example(value);
}

This example is cherry-picked to be easy to understand. Witness programs are rarely this straightforward!

Attempting to cargo check the witness while plugging in the new version of upstream forces the Rust compiler to decide whether i64 matches the new impl Trait parameter. If cargo check passes without errors, there's no breaking change here. But if there's a compilation error, then this is concrete, incontrovertible evidence of breakage!
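
One way to picture the mechanics is through the witness crate's manifest, which is the swappable piece (a sketch; the real tool manages this automatically, and the paths are hypothetical):

[package]
name = "witness"
version = "0.0.0"
edition = "2021"

[dependencies]
# Point at the baseline first, then at the new version, and run
# `cargo check` against each; a new failure is evidence of breakage.
upstream = { path = "../upstream-new" } # or "../upstream-baseline"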

Over the past 22+ weeks, Talyn worked tirelessly to move this from an idea to a working proof of concept. For every problem we foresaw needing to solve, ten more emerged along the way. Talyn did a lot of design work to figure out an approach that would be able to deal with crates coming from various sources (crates.io, a path on disk, a git revision), would support multiple rustdoc JSON formats for all the hundreds of existing lints, and do so in a fashion that doesn't get in the way of adding hundreds more lints in the future.

Even the above list of daunting challenges fails to do justice to the complexity of this project. Talyn created a witness generation prototype that lays the groundwork for robust checking of type-related SemVer breakages in the future. The success of this work is key to the cargo-semver-checks roadmap for 2026 and beyond. We would like to thank Talyn for their work, and we hope to continue working with them on improving witness generation in the future.

Extend behavioural testing of std::arch intrinsics

The std::arch module contains target-specific intrinsics (low-level functions that typically correspond to single machine instructions) which are intended to be used by other libraries. They are meant to match the equivalent intrinsics available as vendor-specific extensions in C.

The intrinsics are tested with three approaches. We test that:

  • The signatures of the intrinsics match the one specified by the architecture.
  • The intrinsics generate the correct instruction.
  • The intrinsics have the correct runtime behavior.

These behavior tests are implemented in the intrinsics-test crate. Initially, this test framework only covered the AArch64 and AArch32 targets, where it was very useful in finding bugs in the implementation of the intrinsics. Madhav's project was about refactoring and improving this framework to make it easier (or really, possible) to extend it to other CPU architectures.
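
To give a flavor of what a behavioural test checks, here is a minimal hand-written sketch: run an intrinsic and compare it against a straightforward scalar reference. This is purely illustrative; the real intrinsics-test framework generates such comparisons automatically.

#[cfg(target_arch = "x86_64")]
fn check_add_epi32(a: [i32; 4], b: [i32; 4]) {
    use std::arch::x86_64::*;

    // Scalar reference implementation: lane-wise wrapping add.
    let expected: [i32; 4] = std::array::from_fn(|i| a[i].wrapping_add(b[i]));

    // SAFETY: SSE2 is part of the x86_64 baseline, so these intrinsics
    // are always available on this target.
    let actual: [i32; 4] = unsafe {
        let va = _mm_loadu_si128(a.as_ptr().cast());
        let vb = _mm_loadu_si128(b.as_ptr().cast());
        let mut out = [0i32; 4];
        _mm_storeu_si128(out.as_mut_ptr().cast(), _mm_add_epi32(va, vb));
        out
    };

    assert_eq!(actual, expected);
}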

First, Madhav split the codebase into a module with shared (architecturally independent) code and a module with ARM-specific logic. Then he implemented support for testing intrinsics for the x86 architecture, which is Rust's most widely used target. In doing so, he allowed us to discover real bugs in the implementation of some intrinsics, which is a great result! Madhav also did a lot of work in optimizing how the test suite is compiled and executed, to reduce CI time needed to run tests, and he laid the groundwork for supporting even more architectures, specifically LoongArch and WebAssembly.

We would like to thank Madhav for all his work on helping us make sure that Rust intrinsics are safe and correct!

Implement merge functionality in bors

The main Rust repository uses a pull request merge queue bot that we call bors. Its current Python implementation has a lot of issues and is difficult to maintain. The goal of this GSoC project was thus to implement the primary merge queue functionality in our Rust rewrite of this bot.

Sakibul first examined the original Python codebase to figure out what it was doing, and then he implemented several bot commands that allow contributors to approve PRs, set their priority, delegate approval rights, temporarily close the merge tree, and many others. He also implemented an asynchronous background process that checks whether a given pull request is mergeable or not (this process is relatively involved, due to how GitHub works), which required implementing a specialized synchronized queue for deduplicating mergeability check requests to avoid overloading the GitHub API. Furthermore, Sakibul also reimplemented (a nicer version of) the merge queue status webpage that can be used to track which pull requests are currently being tested on CI, which ones are approved, etc.
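
The deduplicating queue can be pictured with a small sketch like the one below. The types and names here are illustrative, not the actual bors code, which also has to deal with synchronization across async tasks:

use std::collections::{HashSet, VecDeque};

// A queue of PR numbers waiting for a mergeability check, where
// enqueueing an already-pending PR is a no-op.
#[derive(Default)]
struct DedupQueue {
    order: VecDeque<u64>,
    pending: HashSet<u64>,
}

impl DedupQueue {
    fn enqueue(&mut self, pr: u64) {
        // `insert` returns false if the PR is already pending, so we
        // avoid scheduling redundant GitHub API calls for it.
        if self.pending.insert(pr) {
            self.order.push_back(pr);
        }
    }

    fn dequeue(&mut self) -> Option<u64> {
        let pr = self.order.pop_front()?;
        self.pending.remove(&pr);
        Some(pr)
    }
}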

After the groundwork was prepared, Sakibul could work on the merge queue itself, which required him to think about many tricky race conditions and edge cases to ensure that bors doesn't e.g. merge the wrong PR into the default branch or merge a PR multiple times. He covered these edge cases with many integration tests, to give us more confidence that the merge queue will work as we expect it to, and also prepared a script for creating simulated PRs on a test GitHub repository so that we can test bors "in the wild". And so far, it seems to be working very well!

After we finish the final piece of the merge logic (creating so-called "rollups") together with Sakibul, we will start using bors fully in the main Rust repository. Sakibul's work will thus be used to merge all rust-lang/rust pull requests. Exciting!

Apart from working on the merge queue, Sakibul made many other awesome contributions to the codebase, like refactoring the test suite or analyzing performance of SQL queries. In total, Sakibul sent around fifty pull requests that were already merged into bors! What can we say, other than: Awesome work Sakibul, thank you!

Improve bootstrap

bootstrap is the build system of Rust itself, which is responsible for building the compiler, standard library, and pretty much everything else that you can download through rustup. This project's goal was very open-ended: "improve bootstrap".

And Shourya did just that! He made meaningful contributions to several parts of bootstrap. First, he added much-needed documentation to several core bootstrap data structures and modules, which were quite opaque and hard to understand without any docs. Then he moved on to improving command execution: each bootstrap invocation invokes hundreds of external binaries, and it was difficult to track them. Shourya finished a long-standing refactoring that routes almost all executed commands through a single place. This allowed him to implement command caching as well as command profiling, which shows us which commands are the slowest.
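
The caching idea can be sketched roughly as follows. This is a simplified illustration, not bootstrap's actual API; a real implementation would also need to key on environment variables and the working directory:

use std::collections::HashMap;
use std::process::Command;

#[derive(Default)]
struct CommandCache {
    outputs: HashMap<String, String>,
}

impl CommandCache {
    fn run(&mut self, program: &str, args: &[&str]) -> String {
        // Naive cache key built from the program and its arguments.
        let key = format!("{program} {}", args.join(" "));
        if let Some(cached) = self.outputs.get(&key) {
            return cached.clone(); // cache hit: skip spawning the process
        }
        let out = Command::new(program)
            .args(args)
            .output()
            .expect("failed to spawn command");
        let stdout = String::from_utf8_lossy(&out.stdout).into_owned();
        self.outputs.insert(key, stdout.clone());
        stdout
    }
}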

After that, Shourya moved on to refactoring config parsing. This was no easy task, because bootstrap has A LOT of config options; the single function that parses them had over a thousand lines of code (!). A set of complicated config precedence rules was frequently causing bugs when we had to modify that function. It took him several weeks to untangle this mess, but the result is worth it. The refactored function is much less brittle and easier to understand and modify, which is great for future maintenance.

The final area that Shourya improved was bootstrap tests. He made it possible to run them using bare cargo, which enables debugging them in an IDE, for example. Most importantly, he found a way to run the tests in parallel, which reduced the time to execute them from a minute to under ten seconds and makes contributing to bootstrap itself much more pleasant. These changes required refactoring many bootstrap tests that were using global state, which was not compatible with parallel execution.

Overall, Shourya made more than 30 PRs to bootstrap since April! We are very thankful for all his contributions, as they made bootstrap much easier to maintain. Thank you!

Improve Wild linker test suites

Wild is a very fast linker for Linux that’s written in Rust. It can be used to build executables and shared objects.

Kei’s project was to leverage the test suite of one of the other Linux linkers to help test the Wild linker. This goal was accomplished. Thanks to Kei’s efforts, we now run the Mold test suite against Wild in our CI. This has helped to prevent regressions on at least a couple of occasions and has also helped to show places where Wild has room for improvement.

In addition to this core work, Kei also undertook numerous other changes to Wild during GSoC. Of particular note was the reworking of argument parsing to support --help, which we had wanted for some time. Kei also fixed a number of bugs and implemented various previously missing features. This work has helped to expand the range of projects that can use Wild to build executables.

Kei has continued to contribute to Wild even after the GSoC project finished and has now contributed over seventy PRs. We thank Kei for all the hard work and look forward to continued collaboration in the future!

Improving the Rustc Parallel Frontend: Parallel Macro Expansion

The Rust compiler has a (currently unstable) parallel compilation mode in which some compiler passes run in parallel. One major part of the compiler that is not yet affected by parallelization is name resolution. It has several components, but those selected for this GSoC project were import resolution and macro expansion (which are in fact intermingled into a single fixed-point algorithm). Besides the parallelization itself, another important point of the work was improving the correctness of import resolution by eliminating accidental order dependencies in it, as those also prevent parallelization.

We should note that this was a very ambitious project, and we knew from the beginning that it would likely be quite challenging to reach the end goal within the span of just a few months. And indeed, Lorrens ran into several unexpected issues that showed us that the complexity of this work is well beyond a single GSoC project, so he didn't get to parallelizing the macro expansion algorithm itself. Nevertheless, he did a lot of important work to improve the name resolver and prepare it for being parallelized.

The first thing that Lorrens had to do was actually understand how Rust name resolution works and how it is implemented in the compiler. That is, to put it mildly, a very complex piece of logic, affected by legacy burden in the form of backward compatibility lints, outdated naming conventions, and other technical debt. Even this knowledge by itself is incredibly useful, as the set of people who understand Rust's name resolution today is very small, so it is important to grow it.

Using this knowledge, he made a lot of refactorings to separate significant mutability in name resolver data structures from "cache-like" mutability used for things like lazily loading otherwise immutable data from extern crates, which was needed to unblock parallelization work. He split various parts of the name resolver, got rid of unnecessary mutability and performed a bunch of other refactorings. He also had to come up with a very tricky data structure that allows providing conditional mutable access to some data.
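
The "cache-like" mutability pattern can be illustrated with a small sketch using std::cell::OnceCell: data that is logically immutable, but loaded lazily on first access. The types and names below are illustrative, not rustc's actual data structures:

use std::cell::OnceCell;

struct ExternCrateData {
    // Filled in on first access and immutable afterwards, so shared
    // references can trigger the load without needing `&mut self`.
    items: OnceCell<Vec<String>>,
}

impl ExternCrateData {
    fn items(&self) -> &[String] {
        self.items.get_or_init(load_from_metadata)
    }
}

fn load_from_metadata() -> Vec<String> {
    // Stand-in for lazily reading otherwise-immutable data from an
    // extern crate's metadata.
    vec!["core::mem::swap".to_string()]
}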

These refactorings allowed him to implement something called "batched import resolution", which splits unresolved imports in the crate into "batches", where all imports in a single batch can be resolved independently and potentially in parallel, which is crucial for parallelizing name resolution. We have to resolve a few remaining language compatibility issues, after which the batched import resolution work will hopefully be merged.

Lorrens laid important groundwork for fixing potential correctness issues around name resolution and macro expansion, which unblocks further work on parallelizing these compiler passes, which is exciting. His work also helped unblock some library improvements that had been stuck for a long time. We are grateful for your hard work on improving tricky parts of Rust and its compiler, Lorrens. Thank you!

Make cargo-semver-checks faster

cargo-semver-checks is a Cargo subcommand for finding SemVer API breakages in Rust crates. It is adding SemVer lints at an exponential pace: the number of lints has been doubling every year, and currently stands at 229. More lints mean more work for cargo-semver-checks to do, as well as more work for its test suite, which runs over 250,000 lint checks!

Joseph's contributions took three forms:

  • Improving cargo-semver-checks runtime performance—on large crates, our query runtime went from ~8s to ~2s, a 4x improvement!
  • Improving the test suite's performance, enabling us to iterate faster. Our test suite used to take ~7min and now finishes in ~1min, a 7x improvement!
  • Improving our ability to profile query performance and inspect performance anomalies, both of which were proving a bottleneck for our ability to ship further improvements.

Joseph described all the clever optimization tricks leading to these results in his final report. To encourage you to check out the post, we'll highlight a particularly elegant optimization described there.

cargo-semver-checks relies on rustdoc JSON, an unstable component of Rust whose output format often has breaking changes. Since each release of cargo-semver-checks supports a range of Rust versions, it must also support a range of rustdoc JSON formats. Fortunately, each file carries a version number that tells us which version's serde types to use to deserialize the data.

Previously, we deserialized the JSON file twice: once with a serde type that only loaded the format_version: u32 field, and a second time with the appropriate serde type that matches the format. This works fine, but many large crates generate rustdoc JSON files that are 500 MiB+ in size, requiring us to walk all that data twice. While serde is quite fast, there's nothing as fast as not doing the work twice in the first place!
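
To make the first pass concrete, it might look something like the sketch below, assuming serde and serde_json; the struct and function names are illustrative, not the real cargo-semver-checks types:

#[derive(serde::Deserialize)]
struct VersionProbe {
    // serde ignores every other field in the document, but the parser
    // still has to walk all of them to find this one.
    format_version: u32,
}

fn sniff_version(json_text: &str) -> serde_json::Result<u32> {
    let probe: VersionProbe = serde_json::from_str(json_text)?;
    Ok(probe.format_version)
}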

So we used a trick: optimistically check if the format_version field is the last field in the JSON file, which happens to be the case every time (even though it is not guaranteed). Rather than parsing JSON, we merely look for a , character in the last few dozen bytes, then look for : after the , character, and for format_version between them. If this is successful, we've discovered the version number while avoiding going through hundreds of MB of data! If this fails for any reason, we just fall back to the original approach, having only wasted the effort of looking at 20ish extra bytes.
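
Here is a hedged sketch of that tail scan; the function name and the 64-byte window are illustrative choices rather than the actual implementation:

fn tail_scan_version(json: &[u8]) -> Option<u32> {
    // Look only at the last few dozen bytes of the (possibly 500 MiB+) file.
    let tail = json.get(json.len().saturating_sub(64)..)?;
    // If the window happens to split a multi-byte character, this fails
    // and we fall back to full deserialization.
    let text = std::str::from_utf8(tail).ok()?;
    // Expect `..., "format_version": NN}` at the very end of the document.
    let after_comma = &text[text.rfind(',')? + 1..];
    let (key, value) = after_comma.split_once(':')?;
    if key.trim() != "\"format_version\"" {
        return None; // not the last field after all: use the slow path
    }
    value.trim().strip_suffix('}')?.trim().parse().ok()
}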

Joseph did a lot of profiling and performance optimizations to make cargo-semver-checks faster for everyone, with awesome results. Thank you very much for your work!

Make Rustup Concurrent

The Rustup team has envisioned migrating the rustup codebase to async IO ever since the introduction of the global tokio runtime in #3367. As a very important part of that vision, this project's goal was to introduce proper concurrency to rustup. Francisco did that by attacking two aspects of the codebase at once:

  1. He created a new set of user interfaces for displaying concurrent progress.
  2. He implemented a new toolchain update checking & installation flow that is idiomatically concurrent.

As a warmup, Francisco made rustup check concurrent, resulting in a rather easy 3x performance boost in certain cases. Along the way, he also introduced a new indicatif-based progress bar for reporting progress of concurrent operations, which replaced the original hand-rolled solution.
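
The overall shape of such a change might look like the sketch below, which fires off all per-toolchain lookups at once and awaits them together via futures::future::join_all. This is illustrative only; latest_version is a hypothetical stand-in for rustup's real network check:

async fn check_all(toolchains: Vec<String>) {
    let lookups = toolchains.into_iter().map(|toolchain| async move {
        let latest = latest_version(&toolchain).await;
        (toolchain, latest)
    });
    // All lookups make progress concurrently instead of one at a time.
    for (toolchain, latest) in futures::future::join_all(lookups).await {
        println!("{toolchain} - latest: {latest}");
    }
}

async fn latest_version(_toolchain: &str) -> String {
    // Hypothetical: in rustup this would consult update channel metadata.
    String::from("1.91.0")
}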

After that, the focus of the project moved to the toolchain installation flow used in commands like rustup toolchain install and rustup update. In this part, Francisco developed two main improvements:

  1. The possibility of downloading multiple components at once when setting up a toolchain, controlled by the RUSTUP_CONCURRENT_DOWNLOADS environment variable. Setting this variable to a value greater than 1 is particularly useful in certain internet environments where the speed of a single download connection could be restricted by QoS (Quality of Service) limits.
  2. The ability to interleave component network downloads and disk unpacking. For the moment, unpacking still happens sequentially, but disk and net I/O can finally be overlapped! This yields a net reduction in toolchain installation time, as only the last component being downloaded has a noticeable unpacking delay. In our tests, this typically shaves 4-6 seconds (on fast connections, that's ~33% faster!) off setting up a toolchain with the default profile.

We have to say that these results are very impressive! While shaving a few seconds off toolchain installation might not look so important at first glance, rustup is ubiquitously used to install Rust toolchains on the CI of tens of thousands of Rust projects, so this improvement (and the further improvements that it unlocks) will have an enormous effect across the Rust ecosystem. Many thanks to Francisco Gouveia for his enthusiasm and active participation, without which this wouldn't have worked out!

Mapping the Maze of Rust's UI Test Suite with Established Continuous Integration Practices

The snapshot-based UI test suite is a crucial part of the Rust compiler's test suite. It contains a lot of tests: over 19000 at the time of writing. The organization of this test suite is thus very important, for at least two reasons:

  1. We want to be able to find specific tests, identify related tests, and have some sort of logical grouping of related tests.
  2. We have to ensure that no directory contains so many entries that GitHub gives up rendering the directory.

Furthermore, having informative test names and having some context for each test is particularly important, as otherwise contributors would have to reverse-engineer test intent from git blame and friends.

Over the years, we have accumulated a lot of unorganized stray test files in the top level tests/ui directory, and have a lot of generically named issue-*.rs tests in the tests/ui/issues/ directory. The former makes it annoying to find more meaningful subdirectories, while the latter makes it completely non-obvious what each test is about.

Julien's project was about introducing some order into the chaos. And that was indeed achieved! Through Julien's efforts (in conjunction with efforts from other contributors), we now have:

  • No more stray tests under the immediate tests/ui/ top-level directory; they are now organized into more meaningful subdirectories. This allowed us to introduce a style check to prevent new stray tests from being added.
  • A top-level document containing TL;DRs for each of the immediate subdirectories.
  • Substantially fewer generically-named issue-*.rs tests under tests/ui/issues/.

Test organization (and more generally, test suite ergonomics) is an often under-appreciated aspect of maintaining complex codebases. Julien spent a lot of effort improving the test ergonomics of the Rust compiler, both in last year's GSoC (where he vastly improved our "run-make" test suite), and then again this year, where he made our UI test suite more ergonomic. We greatly appreciate your meticulous work, Julien! Thank you very much.

Modernising the libc Crate

libc is a crucial crate in the Rust ecosystem (on average, it has ~1.5 million daily downloads), providing bindings to system C APIs. This GSoC project had two goals: improve testing for what we currently have, and make progress toward a stable 1.0 release of libc.

Test generation is handled by the ctest crate, which creates unit tests that compare properties of Rust API to properties of the C interfaces it binds. Prior to the project, ctest used an obsolete Rust parser that had stopped receiving major updates about eight years ago, meaning libc could not easily use any syntax newer than that. Abdul completely rewrote ctest to use syn as its parser and make it much easier to add new tests, then went through and switched everything over to the more modern ctest. After this change, we were able to remove a number of hacks that had been needed to work with the old parser.
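
To give a feel for what ctest produces, here is a hedged illustration of the kind of generated check: compare the layout of a Rust binding against the C definition. The c_size_of_timespec helper is hypothetical, standing in for a function exported from a compiled C shim:

#[test]
fn timespec_size_matches_c() {
    extern "C" {
        // Hypothetical function from a generated C shim that returns
        // sizeof(struct timespec) as the C compiler sees it.
        fn c_size_of_timespec() -> u64;
    }
    let rust_size = std::mem::size_of::<libc::timespec>() as u64;
    assert_eq!(rust_size, unsafe { c_size_of_timespec() });
}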

The other part of the project was to make progress toward the 1.0 release of libc. Abdul helped with this by going through and addressing a number of issues that need to be resolved before the release, many of which were made possible with all the ctest changes.

While there is still a lot of work left to do before libc can reach 1.0, Abdul's improvements will go a long way towards making that work easier, as they give us more confidence in the test suite, which is now much easier to modify and extend. Thank you very much for all your work!

Prepare stable_mir crate for publishing

This project's goal was to prepare the Rust compiler's stable_mir crate (eventually renamed to rustc_public) for publication on crates.io. The crate provides a way to interface with the Rust compiler for analyzing Rust code. While it offered easier APIs for tool developers, it lacked proper versioning and was tightly coupled to compiler versions. The goal was to enable independent publication with semantic versioning.

The main technical work involved restructuring rustc_public and rustc_public_bridge (previously named rustc_smir) by inverting their dependency relationship. Makai resolved circular dependencies by temporarily merging the crates and gradually separating them with the new architecture. They also split the existing compiler interface to separate public APIs from internal compiler details.

Furthermore, Makai established infrastructure for dual maintenance: keeping an internal version in the Rust repository to track compiler changes while developing the publishable version in a dedicated repository. Makai built an automated system to coordinate between the two versions, and developed custom tooling to validate compiler version compatibility and to run tests.

Makai successfully completed the core refactoring and infrastructure setup, making it possible to publish rustc_public independently with proper versioning support for the Rust tooling ecosystem! As a bonus, Makai contributed several bug fixes and implemented new APIs that had been requested by the community. Great job Makai!

Prototype an alternative architecture for cargo fix using cargo check

The cargo fix command applies fixes suggested by lints, which makes it useful for cleaning up sloppy code, reducing the annoyance of toolchain upgrades when lints change, and helping with edition migrations and new lint adoption. However, it has a number of issues: it can be slow, it only applies a subset of possible lints, and it doesn't provide an easy way to select which lints to fix.

These problems are caused by its current architecture: cargo fix is implemented as a variant of cargo check that replaces rustc with cargo itself running in a special mode, which calls rustc in a loop and applies fixes until none remain. While this special rustc-proxy mode is running, a cross-process lock is held to force only one build target to be fixed at a time to avoid race conditions. This ensures correctness at the cost of performance, and it makes the rustc-proxy difficult to make interactive.

Glen implemented a proof of concept of an alternative design called cargo-fixit. cargo fixit spawns cargo check in a loop, determining which build targets are safe to fix in a given pass, and then applying the suggestions. This puts the top-level program in charge of what fixes get applied, making it easier to coordinate. It also allows the locking to be removed and opens the door to an interactive mode.
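
The control loop described above might be sketched as follows. The two helper functions are hypothetical stand-ins rather than part of cargo or cargo-fixit; a real implementation would parse the JSON diagnostics and coordinate per-target fixing:

use std::process::Command;

fn fix_until_clean() {
    loop {
        // `cargo check --message-format=json` emits diagnostics,
        // including machine-applicable suggestions, as JSON on stdout.
        let output = Command::new("cargo")
            .args(["check", "--message-format=json"])
            .output()
            .expect("failed to run cargo check");
        let suggestions = machine_applicable_suggestions(&output.stdout);
        if suggestions.is_empty() {
            break; // fixed point reached: nothing left to apply
        }
        apply_to_source_files(&suggestions);
    }
}

// Hypothetical helpers, stubbed out for the sketch.
fn machine_applicable_suggestions(_json: &[u8]) -> Vec<String> {
    Vec::new()
}
fn apply_to_source_files(_suggestions: &[String]) {}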

Glen performed various benchmarks to test how the new approach performs. And in some benchmarks, cargo fixit was able to finish within a few hundred milliseconds, where before the same task took cargo fix almost a minute! As always, there are trade-offs; the new approach comes at the cost that fixes in packages lower in the dependency tree can cause later packages to be rebuilt multiple times, slowing things down, so there were also benchmarks where the old design was a bit faster. The initial results are still very promising and impressive!

Further work remains to be done on cargo-fixit to investigate how it could be optimized further and what its interface should look like before stabilization. We thank Glen for all the hard work on this project, and we hope that one day the new design will be used by default in Cargo, bringing faster and more flexible fixing of lint suggestions to everyone!

Prototype Cargo Plumbing Commands

The goal of this project was to advance our Project Goal of creating low-level ("plumbing") Cargo subcommands, making it easier for other tools to reuse parts of Cargo.

Vito created a prototype of several plumbing commands in the cargo-plumbing crate. The idea was to better understand what the plumbing commands should look like, and what is needed from Cargo to implement them. Vito had to make compromises in some of these commands to avoid being blocked on changes to the current Cargo Rust APIs, and he helpfully documented those blockers. For example, instead of relying solely on the manifests that the user passed in, the plumbing commands re-read the manifests within each command; this prevents callers from editing the manifests to get specific behavior out of Cargo, e.g. dropping all workspace members to allow resolving dependencies on a per-package basis.

Vito did a lot of work, as he implemented seven different plumbing subcommands:

  • locate-manifest
  • read-manifest
  • read-lockfile
  • lock-dependencies
  • write-lockfile
  • resolve-features
  • plan-build

As future work, we would like to deal with some unresolved questions around how to integrate these plumbing commands within Cargo itself, and extend the set of plumbing commands.

We thank Vito for all his work on improving the flexibility of Cargo.

Conclusion

We would like to thank all contributors that have participated in Google Summer of Code 2025 with us! It was a blast, and we cannot wait to see which projects GSoC contributors will come up with in the next year. We would also like to thank Google for organizing the Google Summer of Code program and for allowing us to have so many projects this year. And last, but not least, we would like to thank all the Rust mentors who were tirelessly helping our contributors to complete their projects. Without you, Rust GSoC would not be possible.

  1. You can read about what those individual compiler stages mean, e.g. here.

The Mozilla BlogFirefox tab groups just got an upgrade, thanks to your feedback

Firefox tab grouping with cursor selecting “Recipes” and a dropdown list; “Paris Trip” group visible

Tab groups have become one of Firefox’s most loved ways to stay organized — over 18 million people have used the feature since it launched earlier this year. Since then, we’ve been listening closely to feedback from the Mozilla Connect community to make this long-awaited feature even more helpful.

We’ve just concluded a round of highly requested tab groups updates that make it easier than ever to stay focused, organized, and productive. Check out what we’ve been up to, and if you haven’t tried tab groups yet, here’s a helpful starting guide. 

Preview tab group contents on hover

Starting in Firefox 145, you can peek inside a group without expanding it. Whether you’re checking a stash of tabs set aside for deep research or quickly scanning a group to find the right meeting notes doc, hover previews give you the context you need — instantly.

Keep the active tab visible in a collapsed group — and drag tabs into it

Since Firefox 142, when you collapse a group, the tab you’re working in remains visible. It’s a small but mighty improvement that reduces interruptions. And, starting in Firefox 143, you can drag a tab directly into a collapsed group without expanding it. It’s a quick, intuitive way to stay organized while reducing on-screen clutter.

Each of these ideas came from your feedback on Mozilla Connect. We’re grateful for your engagement, creativity, and patience as our team works to improve Tab Groups.

What’s next for tab groups

We’ve got a big, healthy stash of great ideas and suggestions to explore, but we’d love to hear more from you on two areas of long-term interest: 

  • Improving the usefulness and ease of use of saved tab groups. We’re curious how you’re using them and how we can make the experience more helpful to you. What benefits do they bring to your workflow compared to bookmarks? 
  • Workspaces. Some of you have requested a way to separate contexts by creating workspaces — sets of tabs and tab groups that are entirely isolated from each other, yet remain available within a single browser window. We are curious about your workspace use cases and where context separation via window management or profiles doesn’t meet your workflow needs. Is collaboration an important feature of the workspaces for you? 

Have ideas and suggestions? Let us know in this Mozilla Connect thread!

The post Firefox tab groups just got an upgrade, thanks to your feedback appeared first on The Mozilla Blog.

The Rust Programming Language BlogLaunching the 2025 State of Rust Survey

It’s time for the 2025 State of Rust Survey!

The Rust Project has been collecting valuable information about the Rust programming language community through our annual State of Rust Survey since 2016, which means that this year marks the tenth edition of this survey!

We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. The results will allow us to more deeply understand the global Rust community and how it evolves over time.

Like last year, the 2025 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until December 17. Trends and key insights will be shared on blog.rust-lang.org as soon as possible.

We are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:

  • English
  • Chinese (Simplified)
  • Chinese (Traditional)
  • French
  • German
  • Japanese
  • Ukrainian
  • Russian
  • Spanish
  • Portuguese (Brazil)

Note: the non-English translations of the survey are provided in a best-effort manner. If you find any issues with the translations, we would be glad if you could send us a pull request to improve the quality of the translations!

Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.

This survey would not be possible without the time, resources, and attention of the Rust Survey Team, the Rust Foundation, and other collaborators. We would also like to thank the following contributors who helped with translating the survey (in no particular order):

Thank you!

If you have any questions, please see our frequently asked questions.

We appreciate your participation!

Click here to read a summary of last year's survey findings.

By the way, the Rust Survey team is looking for new members. If you like working with data and coordinating people, and would like to help us out with managing various Rust surveys, please drop by our Zulip channel and say hi.

Mozilla ThunderbirdVIDEO: An Android Retrospective

If you can believe it, Thunderbird for Android has been out for just over a year! In this episode of our Community Office Hours, Heather and Monica check back in with the mobile team after our chat with them back in January. Sr. Software Engineer Wolf Montwé and our new Manager of Mobile Apps, Jon Bott, look back at what the growing mobile team has been able to accomplish this last year, what we're still working on, and what's up ahead.

We’ll be back next month, talking with members of the desktop team all about Exchange support landing in Thunderbird 145!

Thunderbird for Android: One Year Later

The biggest visual change to the app since last year is the new Account Drawer. The mobile team wants to help users easily tell their accounts apart and switch between them. While this is still a work in progress, we’ve started making these changes in Thunderbird 11.0. We know not everyone is excited about UI changes, but we hope most users like these initial changes! 

Another major but hidden change involves updating our very old code, which came from K-9 Mail. Much of the K-9 code goes back to 2009! Having to work with old code explains why some fixes or new features, which should be simple, turn out to be complex and time consuming. Changes end up affecting more components than we expect, which causes delivery timelines to stretch from a week to months.

We are also still working to proactively eliminate tech debt, which will make the code more reliable and secure, plus allow future improvements and feature additions to be done more quickly. Even though the team didn’t eliminate as much tech debt as they planned, they feel the work they’ve done this year will help reduce even more next year.

Over this past year, the team has also realized Thunderbird for Android users have different needs from K-9 Mail users. Users coming from Thunderbird on desktop want more of the desktop app's features, and delivering them is definitely a major goal for our future development. The current feature gap won't always be here!

Recently, the mobile team has started moving to a monthly release cadence, similar to Firefox and the monthly Thunderbird channel. Changing from bi-monthly to monthly reduces the risks of changing huge amounts of code all at once. The team can make more incremental changes, like the account drawer, in a smaller window. Regular, “bite size” changes allow us to have more conversation with the community. The development team also benefits because they can make better timelines and can more accurately predict the amount of work needed to ship future releases.

A Growing Team and Community

Since we released the Android app, the mobile team and contributor community has grown! One of the unexpected benefits of growing the team and community has been improved documentation. Documentation makes things visible for our talented engineers and existing volunteers, and makes it easier for newcomers to join the project!

Our volunteers have made some incredible contributions to the app! Translators have not only bolstered popular languages like German and French, but have enabled previously unsupported languages. In addition to localization, community members have helped develop the app. Shamin-emon has taken on complicated changes, and has been very patient when some of his proposed changes were delayed. Arnt, another community member, debugged and patched an issue with utf-8 strings in IMAP. And Platform34 triaged numerous issues to give developers insights into reported bugs.

Finally, we’re learning how to balance refactoring and improving an Android app, and at the same time building an iOS app from scratch! Both apps are important, but the team has had to think about what’s most important in each app. Android development is focusing on prioritizing top bugs and splitting the work to fix them into bite size pieces. With iOS, the team can develop in small increments from the start. Fortunately, the growing team and engaged community is making this balancing act easier than it would have been a year ago.

Looking Forward

In the next year, what can Android users look forward to? At the top of the priority list is better architecture leading to a better user experience, along with view and Message List improvements, HTML signatures, and JMAP support. For the iOS app, the team is focused on getting basic functionality into place, such as reading and writing mail, attachments, and work on the JMAP and IMAP protocols.

VIDEO (Also on Peertube):

Listen to the Episode

The post VIDEO: An Android Retrospective appeared first on The Thunderbird Blog.

The Servo BlogOctober in Servo: better for the web, better for embedders, better for you

Servo now supports several new web platform features:

servoshell nightly showing new support for CompressionStream and synthetic bold

servoshell for macOS now ships as native Apple Silicon binaries (@jschwe, #39981). Building servoshell for macOS x86-64 still works for now, but is no longer officially supported by automated testing in CI (see § For developers).

In servoshell for Android, you can now enable experimental mode with just two taps (@jdm, #40054), use the software keyboard (@jdm, #40009), deliver touch events to web content (@mrobinson, #40240), and dismiss the location field (@jdm, #40049). Pinch zoom is now fully supported in both Servo and servoshell, taking into account the locations of pinch inputs (@mrobinson, @atbrakhi, #40083) and allowing keyboard scrolling when zoomed in (@mrobinson, @atbrakhi, #40108).

servoshell on Android. Left: you can now turn on experimental mode in the settings menu. Right: we now support the soft keyboard and touch events.

AbortController and AbortSignal are now enabled by default (@jdm, @TimvdLippe, #40079, #39943), after implementing AbortSignal.timeout() (@Taym95, #40032) and fixing throwIfAborted() on AbortSignal (@Taym95, #40224). If this is the first time you’ve heard of them, you might be surprised how important they are for real-world web compat! Over 40% of Google Chrome page loads at least check if they are supported, and many popular websites including GitHub and Discord are broken without them.

XPath is now enabled by default (@simonwuelker, #40212), after implementing ‘@attr/parent’ queries (@simonwuelker, #39749), Copy > XPath in the DevTools Inspector (@simonwuelker, #39892), completely rewriting the parser (@simonwuelker, #39977), and landing several other fixes (@simonwuelker, #40103, #40105, #40161, #40167, #39751, #39764).

Servo now supports new KeyboardEvent({keyCode}) and ({charCode}) (@atbrakhi, #39590), which is enough to get Speedometer 3.0 and 3.1 working on macOS.

servoshell nightly showing Speedometer 3.1 running successfully on macOS

ImageData can now be sent over postMessage() and structuredClone() (@Gae24, #40084).

Layout engine

Our layout engine can now render text in synthetic bold (@minghuaw, @mrobinson, #39519, #39681, #39633, #39691, #39713), and now selects more appropriate fallback fonts for Kanji in Japanese text (@arayaryoma, #39608).

‘initial-scale’ now does the right thing in <meta name=viewport> (@atbrakhi, @shubhamg13, @mrobinson, #40055).

We’ve improved the way we handle ‘border-radius’ (@Loirooriol, #39571) and margin collapsing (@Loirooriol, #36322). While they’re fairly unassuming fixes on the surface, both of them allowed us to find interop issues in the big incumbent engines (@Loirooriol, #39540, #36321) and help improve web standards (@noamr, @Loirooriol, csswg-drafts#12961, csswg-drafts#12218).

In other words, Servo is good for the web, even if you’re not using it yet!

Embedding and ecosystem

Our HTML-compatible XPath implementation now lives in its own crate, and it’s no longer limited to the Servo DOM (@simonwuelker, #39546). We don’t have any specific plans to release this as a standalone library just yet, but please let us know if you have a use case that would benefit from this!

You can now take screenshots of webviews with WebView::take_screenshot (@mrobinson, @delan, #39583).

Historically Servo has struggled with situations causing 100% CPU usage or unnecessary work on every tick of the event loop, whenever a page is considered “active” or “animating” (#25305, #3406). We had since throttled animations (@mrobinson, #37169) and reflows (@mrobinson, @Loirooriol, #38431), but only to fixed rates of 120 Hz and 60 Hz respectively.

But starting this month, you can run Servo with vsync, thanks to the RefreshDriver trait (@coding-joedow, @mrobinson, #39072), which allows embedders to tell Servo when to start rendering each frame. The default driver continues to run at 120 Hz, but you can define and install your own with ServoBuilder::refresh_driver.

Breaking changes

Servo’s embedding API has had a few breaking changes:

We’ve improved page zoom in our webview API (@atbrakhi, @mrobinson, @shubhamg13, #39738), which includes some breaking changes:

  • WebView::set_zoom was renamed to set_page_zoom, and it now takes an absolute zoom value. This makes it idempotent, but it means if you want relative zoom, you’ll have to multiply the zoom values yourself (see the sketch after this list).
  • Use the new WebView::page_zoom method to get the current zoom value.
  • WebView::reset_zoom was removed; use set_page_zoom(1.0) instead.
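
A minimal sketch of relative zoom under the new API, assuming a WebView handle from Servo's embedding API is in scope; the two method names come from the notes above, while the surrounding code and method receivers are assumptions:

fn zoom_in(webview: &WebView) {
    // Read the current absolute zoom, then write back a multiplied
    // value to recover the old relative-zoom behavior.
    let current = webview.page_zoom();
    webview.set_page_zoom(current * 1.1);
}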

Some breaking changes were also needed to give embedders a more powerful way to share input events with webviews (@mrobinson, #39720). Often both your app and the pages in your webviews may be interested in knowing when users press a key. Servo handles these situations by asking the embedder for all potentially useful input events, then echoing some of them back:

  1. Embedder calls WebView::notify_input_event to tell Servo about an input event, then web content (and Servo) can handle the event.
  2. Servo calls WebViewDelegate::notify_keyboard_event to tell the embedder about keyboard events that were neither canceled by scripts nor handled by Servo itself. The event details are included in the arguments.

Embedders had no way of knowing when non-keyboard input events, or keyboard events that were canceled or handled by Servo, had completed all of their effects in Servo. This was good enough for servoshell’s overridable key bindings, but not for WebDriver, where commands like Perform Actions need to reliably wait for input events to be handled. To solve these problems, we’ve replaced notify_keyboard_event with notify_input_event_handled:

  1. Embedder calls WebView::notify_input_event to tell Servo about an input event, then web content (and Servo) can handle the event. This now returns an InputEventId, allowing embedders to remember input events that they still care about for step 2.
  2. Servo calls WebViewDelegate::notify_input_event_handled to tell the embedder about every input event, when Servo has finished handling it. The event details are not included in the arguments, but you can use the InputEventId to look up the details in the embedder.

Perf and stability

Servo now does zero unnecessary layout work when updating canvases and animated images, thanks to a new “UpdatedImageData” layout mode (@mrobinson, @mukilan, #38991).

We’ve fixed crashes when clicking on web content on Android (@mrobinson, #39771), and when running Servo on platforms where JIT is forbidden (@jschwe, @sagudev, #40071, #40130).

For developers

CI builds for pull requests should now take 70% less time, since they now run on self-hosted CI runners (@delan, #39900, #39915). Bencher builds for runtime benchmarking now run on our new dedicated servers, so our Speedometer and Dromaeo data should now be more accurate and less noisy (@delan, #39272).

We’ve now switched all of our macOS builds to run on arm64 (@sagudev, @jschwe, #38460, #39968). This helps back our macOS releases with thorough automated testing on the same architecture as our releases, but we can’t run them on self-hosted CI runners yet, so they may be slower for the time being.

Work is underway to set up faster macOS arm64 runners on our own servers (@delan, ci-runners#64), funded by your donations. Speaking of which!

Donations

Thanks again for your generous support! We are now receiving 5753 USD/month (+1.7% over September) in recurring donations.

This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo. Keep an eye out for further CI improvements in the coming months, including faster macOS arm64 builds and ten-minute WPT builds.

Servo is also on thanks.dev, and already 28 GitHub users (same as September) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.

Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.

The Mozilla BlogThe writer behind ‘Diary of a Sad Black Woman’ on making space for feelings online

woman sitting in a library holding a large white chess knight piece.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.

We caught up with Jacque Aye, the author behind “Diary of a Sad Black Woman.” She talks about blogging culture, writing fiction for “perpetually sighing adults” and Lily Allen’s new album.

What is an internet deep dive that you can’t wait to jump back into?

Right now, I’m deep diving into Lily Allen’s newest album! Not for the gossip, although there’s plenty of that to dive into, but for the psychology behind it all. I appreciate creatives who share so vulnerably but in nuanced and honest ways. Sharing experiences is what makes us feel human, I think. The way she outlined falling in love, losing herself, struggling with insecurities, and feeling numb was so relatable to me. Now, would I share as many details? Probably not. But I do feel her.

What was the first online community you engaged with?

Blogger. I was definitely a Blogger baby, and I used to share my thoughts and outfits there, the same way I currently share on Substack. I sometimes miss those times and my little oversharing community. Most people didn’t really have personal brands then, so everything felt more authentic, anonymous and free.

What is the one tab you always regret closing?

Substack! I always find the coolest articles, save the tab, then completely forget I meant to read it, ahhhh.

What can you not stop talking about on the internet right now?

I post about my books online to an obsessive and almost alarming degree, ha. I’ve been going on and on about my weird, whimsical, and woeful novels, and people seem to resonate with that. I describe my work as Lemony Snicket meets a Boots Riley movie, but for perpetually sighing adults. I also never, ever shut up about my feelings. You can even read my diary online. For free. On Substack.

If you could create your own corner of the internet, what would it look like?

I feel super lucky to have my own little corner of the internet! In my corner, we love wearing cute outfits, listening to sad girl music, watching Tim Burton movies, and reading about flawed women going through absurd trials.

What articles and/or videos are you waiting to read/watch right now?

I can’t wait to settle in and watch Knights of Guinevere! It looks so, so good, and I adore the creator.

What is your favorite corner of the internet?

This will seem so random, but right now, besides Substack, I’m really loving Threads. People are so vulnerable on there, and so willing to share personal stories and ask for help and advice. I love any space where I can express the full range of my feelings… and also share my books and outfits, ha.

How do you imagine the next version of the internet supporting creators who lead with emotion and care?

I really hope the next version of the internet reverts back to the days of Blogger and Tumblr. Where people could design their spaces how they see fit, integrate music and spew their hearts out without all the judgment.


Jacque Aye is an author and writes “Diary of a Sad Black Woman” on Substack. As a woman who suffers from depression and social anxiety, she’s made it her mission to candidly share her experiences with the hopes of helping others dealing with the same. This extends into her fiction work, where she pens tales about woeful women trying their best, with a surrealist, magical touch. Inspired by authors like Haruki Murakami, Sayaka Murata, and Lemony Snicket, Jacque’s stories are dark, magical, and humorous with a hint… well, a bunch… of absurdity.

The post The writer behind ‘Diary of a Sad Black Woman’ on making space for feelings online appeared first on The Mozilla Blog.

The Mozilla BlogIntroducing AI, the Firefox way: A look at what we’re working on and how you can help shape it

Illustration of Firefox browser showing menu options for Current, AI, and Private windows with glowing effects.

We recently shared how we are approaching AI in Firefox — with user choice and openness as our guiding principles. That’s because we believe AI should be built like the internet —  open, accessible, and driven by choice — so that users and the developers helping to build it can use it as they wish, help shape it and truly benefit from it.

In Firefox, you’ll never be locked into one ecosystem or have AI forced into your browsing experience. You decide when, how or whether to use it at all. You’ve already seen this approach in action through some of our latest features like the AI chatbot in the sidebar for desktop or Shake to Summarize on iOS. 

Now, we’re excited to invite you to help shape the work on our next innovation: an AI Window. It’s a new, intelligent and user-controlled space we’re building in Firefox that lets you chat with an AI assistant and get help while you browse, all on your terms. Completely opt-in, you have full control, and if you try it and find it’s not for you, you can choose to switch it off.

As always, we’re building in the open — and we want to build this with you. Starting today, you can sign up to receive updates on our AI Window and be among the first to try it and give us feedback. 

Firefox logo with orange fox wrapped around purple globe.

AI Window: Built for choice & control

Join the waitlist

We’re building a better browser, not an agenda

We see a lot of promise in AI browser features making your online experience smoother, more helpful, and free from the everyday disruptions that break your flow. But browsers made by AI companies ask you to make a hard choice — either use AI all the time or don’t use it at all.

We’re focused on making the best browser, which means recognizing that everyone has different needs. For some, AI is part of everyday life. For others, it’s useful only occasionally. And many are simply curious about what it can offer, but unsure where to start.

Regardless of your choice, with Firefox, you’re in control. 

You can continue using Firefox as you always have for the most customizable experience, or switch from classic to Private Window for the most private browsing experience. And now, with AI Window, you have the option to opt in to our most intelligent and personalized experience yet — providing you with new ways to interact with the web.

Why is investing in AI important for Firefox?

With AI becoming a more widely adopted interface to the web, the principles of transparency, accountability, and respect for user agency are critical to keeping it free, open, and accessible to all. As an independent browser, we are well positioned to uphold these principles.

While others are building AI experiences that keep you locked in a conversational loop, we see a different path — one where AI serves as a trusted companion, enhancing your browsing experience and guiding you outward to the broader web.

We believe standing still while technology moves forward doesn’t benefit the web or humanity. That’s why we see it as our responsibility to shape how AI integrates into the web — in ways that protect and give people more choice, not less.

Help us shape the future of the web 

Our success has always been driven by our community of users and developers, and we’ll continue to rely on you as we explore how AI can serve the web — without ever losing focus on our commitment to build what matters most to our users: a Firefox that remains fast, secure and private. 

Join us by contributing to open-source projects and sharing your ideas on Mozilla Connect.

The post Introducing AI, the Firefox way: A look at what we’re working on and how you can help shape it appeared first on The Mozilla Blog.

Mozilla Privacy BlogBehind the Manifesto: The Survivors of the Open Web

Welcome to the blog series “Behind the Manifesto,” where we unpack core issues that are critical to Mozilla’s mission. The Mozilla Manifesto represents Mozilla’s commitment to advancing an open, global internet. This blog series digs deeper on our vision for the web and the people who use it, and how these goals are advanced in policymaking and technology. 

 

The internet wasn’t always a set of corporate apps and walled gardens. In its early days, it was a place of experimentation — a digital commons where anyone could publish, connect, and build without asking permission. That openness depended on invisible layers of technology that allowed the web to function as a true public space. Layers such as browser engines, open standards, and shared protocols are the scaffolding that made the internet free, creative, and interoperable.

In 2013, there were five major browser engines. Now, only three remain: Apple’s WebKit, Google’s Blink, and Mozilla’s Gecko (which powers Firefox). In a world of giants, Gecko fights not for dominance, but for an internet that is open and accessible to all.

In an era of consolidation, a thriving and competitive browser engine ecosystem is critical. But sadly, browser engines are subject to the same trends toward concentration. Each time we lose a competitor, we lose more than a piece of code. We lose choice, perspectives, and ideas about how the web works.

So, how do we drive competition in browser engines and more widely across the web? How do we promote policies that protect people and encourage meaningful choice? How do we contend with AI as both a disruptor and an impetus for innovation? Can competition interventions protect the open web? What’s the impact of landmark antitrust cases for consumers and the future technology landscape?

These aren’t new questions for Mozilla. They’re the same questions that have shaped our mission for more than 20 years, and the ones we continue to ask today. Our recent Mozilla Meetup in Washington D.C., a panel-style event and happy hour, brought these debates to the forefront.

On October 8th, we convened leading minds in tech policy to explore the future of competition and its role in saving the open web. Before a standing-room-only audience, the panelists discussed browser competition, leading antitrust legislation, landmark cases currently under review, and AI’s impact. Their insights underscored a critical point: the same questions about access, agency and choice that defined parts of the early internet are just as pressing in today’s digital ecosystem, shaping our continued pursuit of an open and diverse web. Below are a few takeaways.

On today’s competition landscape:

Luke Hogg, Director, Technology Policy, Foundation for American Innovation:

“Antitrust is back. One of the emerging lessons of the last year in antitrust cases and competition policy is that with these big questions being answered, the results do tend to be bipartisan. Antitrust is a cross-partisan issue.”

On the United States v. Google LLC search case: 

Kush Amlani, Director, Global Competition & Regulation, Mozilla:

“One of our key concerns was ensuring that search competition didn’t come at the expense of browser competition. And the payments to independent browsers were not banned, and that was obviously granted by the judge…What’s next is really how the remedies are implemented, and how effective they are. And the devil is going to be in the detail, in terms of how useful is this data? How much can third parties benefit from syndicating search results?” 

Alissa Cooper, Executive Director, Knight-Georgetown Institute:

“The search case is set up as being pro-divestiture or anti-divestiture, but it’s really about what is going to work. Divestiture aligns with what was requested. If you leave Chrome under Google, you have to build in surveillance and monitoring in the market to make sure their behavior aligns. If you divest, it becomes independent and can operate on its own without the need for monitoring. In the end, do you think that would be an effective remedy to open the market to reentry? Or do you think there is another option?”

On the impact of AI: 

Amba Kak, Co-Executive Director, AI Now Institute:

“AI has upended the market and changed technology, but it’s also true Big Tech, in many ways, has been training for this very disruption for the last ten years. 

In the early 2010s, key resources — data, compute, talent — were already concentrated within a few players due to regulatory inaction. It’s important to understand that this trajectory of AI aligning with the incentives of Big Tech isn’t an accident, it’s by design.”

On the timing of this fight for the open web:

Alissa Cooper, Executive Director, Knight-Georgetown Institute:

“The difference now [as opposed to previous fights for the web] is that we have a lot of experience. We know what the open world and open web look like. In some ways, this is an advantage. The difference now is the unbelievable amount of corporate power involved. There needs to be a field where new businesses can enter. Without it, we are fighting the last war.”

 

This blog is part of a larger series. Be sure to follow Jenn Taylor Hodges on LinkedIn for further insights into Mozilla’s policy priorities.

 

The post Behind the Manifesto: The Survivors of the Open Web appeared first on Open Policy & Advocacy.

The Mozilla BlogMozilla joins the Digital Public Goods Alliance, championing open source to drive global progress

Today, Mozilla is thrilled to join the Digital Public Goods Alliance (DPGA) as its newest member. The DPGA is a UN-backed initiative that seeks to advance open technologies and ensure that technology is put to use in the public interest and serves everyone, everywhere — like Mozilla’s Common Voice, which has been recognized as a Digital Public Good (DPG). This announcement comes on the heels of a big year of digital policy-making globally, where Mozilla has been at the forefront in advocating for open source AI across Europe, North America and the UK. 

The DPGA is a multi-stakeholder initiative with a mission to accelerate the attainment of the Sustainable Development Goals (SDGs) “by facilitating the discovery, development, use of and investment in digital public goods.” Digital public goods means open-source technology, open data, open and transparent AI models, open standards and open content that adhere to privacy, the do no harm principle, and other best practices. 

This is deeply aligned with Mozilla’s mission. It creates a natural opportunity for collaboration and shared advocacy in the open ecosystem, with allies and like-minded builders from across the globe. As part of the DPGA’s Annual Roadmap for 2025, Mozilla will focus on three work streams: 

  1. Promoting DPGs in the Open Source Ecosystem: Mozilla has long championed open-source, public-interest technology as an alternative to profit-driven development. Through global advocacy, policy engagement, and research, we highlight the societal and economic value of open source, especially in AI. Through our work in the DPGA, we’ll continue pushing for better enabling conditions and funding opportunities for open-source, public-interest technology. 
  2. DPGs and Digital Commons: Mozilla develops and maintains a range of open source projects through our various entities. These include Common Voice, a digital public good with over 33,000 hours of multilingual voice data, and applications like the Firefox web browser and Thunderbird email client. Mozilla also supports open-source AI through our product work, including by Mozilla.ai, and through our venture fund, Mozilla Ventures.
  3. Funding Open Source & Public Interest Technology: Grounded by our own open source roots, Mozilla will continue to fund open source technologies that help to untangle thorny sociotechnical issues. We’ve fueled a broad and impactful portfolio of technical projects. Beginning in the Fall of 2025, we will introduce our latest grantmaking program: an incubator that will help community-driven projects find “product-community fit” in order to attain long-term sustainability.

We hope to use our membership to share research, tooling, and perspectives with a like-minded audience and partner with the DPGA’s diverse community of builders and allies. 

“Open source AI and open data aren’t just about tech,” said Mark Surman, president of Mozilla. “They’re about access to technology and progress for people everywhere. As a double bottom line, mission-driven enterprise, Mozilla is proud to be part of the DPGA and excited to work toward our joint mission of advancing open-source, trustworthy technology that puts people first.” 

To learn more about DPGA, visit https://digitalpublicgoods.net

The post Mozilla joins the Digital Public Goods Alliance, championing open source to drive global progress appeared first on The Mozilla Blog.

This Week In Rust: This Week in Rust 625

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is automesh, a crate for high-performance automatic mesh generation in Rust.

Thanks to Michael R. Buche for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

409 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Mostly quiet week, with the majority of changes coming from the standard library work towards removal of Copy specialization (#135634).

Triage done by @simulacrum. Revision range: 35ebdf9b..055d0d6a

3 Regressions, 1 Improvement, 7 Mixed; 3 of them in rollups. 37 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust Compiler Team (MCPs only)

No Items entered Final Comment Period this week for Rust RFCs, Cargo, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-11-12 - 2025-12-10 🦀

Virtual
Africa
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Making your unsafe very tiny is sort of like putting caution markings on the lethally strong robot arm with no proximity sensors, rather than on the door into the protective cage.

Stephan Sokolow on lobste.rs

Thanks to llogiq for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, ericseppanen, extrawurst, U007D, mariannegoldin, bdillo, opeolluwa, bnchi, KannanPalani57, tzilist

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Developer Experience: Firefox WebDriver Newsletter 145

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 145 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

In Firefox 145, a new contributor landed two patches in our codebase. Thanks to Khalid AlHaddad for the following fixes:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

WebDriver BiDi

Niko Matsakis: Just call clone (or alias)

Continuing my series on ergonomic ref-counting, I want to explore another idea, one that I’m calling “just call clone (or alias)”. This proposal specializes the clone and alias methods so that, in a new edition, the compiler will (1) remove redundant or unnecessary calls (with a lint); and (2) automatically capture clones or aliases in move closures where needed.

The goal of this proposal is to simplify the user’s mental model: whenever you see an error like “use of moved value”, the fix is always the same: just call clone (or alias, if applicable). This model is aiming for the balance of “low-level enough for a Kernel, usable enough for a GUI” that I described earlier. It’s also making a statement, which is that the key property we want to preserve is that you can always find where new aliases might be created – but that it’s ok if the fine-grained details around exactly when the alias is created are a bit subtle.

The proposal in a nutshell

Part 1: Closure desugaring that is aware of clones and aliases

Consider this move future:

fn spawn_services(cx: &Context) {
    tokio::task::spawn(async move {
        //                   ---- move future
        manage_io(cx.io_system.alias(), cx.request_name.clone());
        //        --------------------  -----------------------
    });
    ...
}

Because this is a move future, this takes ownership of cx.io_system and cx.request_name. Because cx is a borrowed reference, this will be an error unless those values are Copy (which they presumably are not). Under this proposal, capturing aliases or clones in a move closure/future would result in capturing an alias or clone of the place. So this future would be desugared like so (using explicit capture clause strawman notation):

fn spawn_services(cx: &Context) {
    tokio::task::spawn(
        async move(cx.io_system.alias(), cx.request_name.clone()) {
            //     --------------------  -----------------------
            //     capture alias/clone respectively

            manage_io(cx.io_system.alias(), cx.request_name.clone());
        }
    );
    ...
}

Part 2: Last-use transformation

Now, this result is inefficient – there are now two aliases/clones. So the next part of the proposal is that the compiler would, in newer Rust editions, apply a new transformation called the last-use transformation. This transformation would identify calls to alias or clone that are not needed to satisfy the borrow checker and remove them. This code would therefore become:

fn spawn_services(cx: &Context) {
    tokio::task::spawn(
        async move(cx.io_system.alias(), cx.request_name.clone()) {
            manage_io(cx.io_system, cx.request_name);
            //        ------------  ---------------
            //        converted to moves
        }
    );
    ...
}

The last-use transformation would apply beyond closures. Given an example like this one, which clones id even though id is never used later:

fn send_process_identifier_request(id: String) {
    let request = Request::ProcessIdentifier(id.clone());
    //                                       ----------
    //                                       unnecessary
    send_request(request)
}

the user would get a warning like so1:

warning: unnecessary `clone` call will be converted to a move
 --> src/main.rs:7:40
  |
8 |     let request = Request::ProcessIdentifier(id.clone());
  |                                              ^^^^^^^^^^ unnecessary call to `clone`
  |
  = help: the compiler automatically removes calls to `clone` and `alias` when not
    required to satisfy the borrow checker
help: change `id.clone()` to `id` for greater clarity
  |
8 -     let request = Request::ProcessIdentifier(id.clone());
8 +     let request = Request::ProcessIdentifier(id);
  |

and the code would be transformed so that it simply does a move:

fn send_process_identifier_request(id: String) {
    let request = Request::ProcessIdentifier(id);
    //                                       --
    //                                   transformed
    send_request(request)
}

Mental model: just call “clone” (or “alias”)

The goal of this proposal is that, when you get an error about a use of moved value, or moving borrowed content, the fix is always the same: you just call clone (or alias). It doesn’t matter whether that error occurs in the regular function body or in a closure or in a future, the compiler will insert the clones/aliases needed to ensure future users of that same place have access to it (and no more than that).

I believe this will be helpful for new users. Early in their Rust journey new users are often sprinkling calls to clone as well as sigils like & more-or-less at random as they try to develop a firm mental model – this is where the “keep calm and call clone” joke comes from. This approach breaks down around closures and futures today. Under this proposal, it will work, but users will also benefit from warnings indicating unnecessary clones, which I think will help them to understand where clone is really needed.

Experienced users can trust the compiler to get it right

But the real question is how this works for experienced users. I’ve been thinking about this a lot! I think this approach fits pretty squarely in the classic Bjarne Stroustrup definition of a zero-cost abstraction:

“What you don’t use, you don’t pay for. And further: What you do use, you couldn’t hand code any better.”

The first half is clearly satisfied. If you don’t call clone or alias, this proposal has no impact on your life.

The key point is the second half: earlier versions of this proposal were more simplistic, and would sometimes result in redundant or unnecessary clones and aliases. Upon reflection, I decided that this was a non-starter. The only way this proposal works is if experienced users know there is no performance advantage to using the more explicit form. This is precisely what we have with, say, iterators, and I think it works out very well. I believe this proposal hits that mark, but I’d like to hear if there are things I’m overlooking.

The last-use transformation codifies a widespread intuition, that clone is never necessary

I think most users would expect that changing message.clone() to just message is fine, as long as the code keeps compiling. But in fact nothing requires that to be the case. Under this proposal, APIs that make clone significant in unusual ways would be more annoying to use in the new Rust edition, and I expect they would ultimately wind up getting changed so that “significant clones” have another name. I think this is a good thing.

Frequently asked questions

I think I’ve covered the key points. Let me dive into some of the details here with a FAQ.

Can you summarize all of these posts you’ve been writing? It’s a lot to digest!

I get it, I’ve been throwing a lot of things out there. Let me begin by recapping the motivation as I see it:

  • I believe our goal should be to focus first on a design that is “low-level enough for a Kernel, usable enough for a GUI”.
    • The key part here is the word enough. We need to make sure that low-level details are exposed, but only those that truly matter. And we need to make sure that it’s ergonomic to use, but it doesn’t have to be as nice as TypeScript (though that would be great).
  • Rust’s current approach to Clone fails both groups of users;
    • calls to clone are not explicit enough for kernels and low-level software: when you see something.clone(), you don’t know whether it is creating a new alias or an entirely distinct value, and you don’t have any clue what it will cost at runtime. There’s a reason much of the community recommends writing Arc::clone(&something) instead.
    • calls to clone, particularly in closures, are a major ergonomic pain point; this has been a clear consensus since we first started talking about this issue.

I then proposed a set of three changes to address these issues, authored in individual blog posts:

  • First, we introduce the Alias trait (originally called Handle). The Alias trait introduces a new method alias that is equivalent to clone but indicates that this will be creating a second alias of the same underlying value.
  • Second, we introduce explicit capture clauses, which lighten the syntactic load of capturing a clone or alias, make it possible to declare up-front the full set of values captured by a closure/future, and will support other kinds of handy transformations (e.g., capturing the result of as_ref or to_string).
  • Finally, we introduce the just call clone proposal described in this post. This modifies closure desugaring to recognize clones/aliases and also applies the last-use transformation to replace calls to clone/alias with moves where possible.

What would it feel like if we did all those things?

Let’s look at the impact of each set of changes by walking through the “Cloudflare example”, which originated in this excellent blog post by the Dioxus folks:

let some_value = Arc::new(something);

// task 1
let _some_value = some_value.clone();
tokio::task::spawn(async move {
    do_something_with(_some_value);
});

// task 2:  listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
    do_something_else_with(_some_a, _some_b, _some_c)
});

As the original blog post put it:

Working on this codebase was demoralizing. We could think of no better way to architect things - we needed listeners for basically everything that filtered their updates based on the state of the app. You could say “lol get gud,” but the engineers on this team were the sharpest people I’ve ever worked with. Cloudflare is all-in on Rust. They’re willing to throw money at codebases like this. Nuclear fusion won’t be solved with Rust if this is how sharing state works.

Applying the Alias trait and explicit capture clauses makes for a modest improvement. You can now clearly see that the calls to clone are alias calls, and you don’t have the awkward _some_value and _some_a variables. However, the code is still pretty verbose:

let some_value = Arc::new(something);

// task 1
tokio::task::spawn(async move(some_value.alias()) {
    do_something_with(some_value);
});

// task 2:  listen for dns connections
tokio::task::spawn(async move(
    self.some_a.alias(),
    self.some_b.alias(),
    self.some_c.alias(),
) {
    do_something_else_with(self.some_a, self.some_b, self.some_c)
});

Applying the Just Call Clone proposal removes a lot of boilerplate and, I think, captures the intent of the code very well. It also retains quite a bit of explicitness, in that searching for calls to alias reveals all the places that aliases will be created. However, it does introduce a bit of subtlety, since (e.g.) the call to self.some_a.alias() will actually occur when the future is created and not when it is awaited:

let some_value = Arc::new(something);

// task 1
tokio::task::spawn(async move {
    do_something_with(some_value.alias());
});

// task 2:  listen for dns connections
tokio::task::spawn(async move {
    do_something_else_with(
        self.some_a.alias(),
        self.some_b.alias(),
        self.some_c.alias(),
    )
});

I’m worried that the execution order of calls to alias will be too subtle. How is this “explicit enough for low-level code”?

There is no question that Just Call Clone makes closure/future desugaring more subtle. Looking at task 1:

tokio::task::spawn(async move {
    do_something_with(some_value.alias());
});

this gets desugared to a call to alias when the future is created (not when it is awaited). Using the explicit form:

tokio::task::spawn(async move(some_value.alias()) {
    do_something_with(some_value)
});

I can definitely imagine people getting confused at first – “but that call to alias looks like it’s inside the future (or closure), how come it’s occurring earlier?”

Yet, the code really seems to preserve what is most important: when I search the codebase for calls to alias, I will find that an alias is created for this task. And for the vast majority of real-world examples, the distinction of whether an alias is created when the task is spawned versus when it executes doesn’t matter. Look at this code: the important thing is that do_something_with is called with an alias of some_value, so some_value will stay alive as long as the task is executing. It doesn’t really matter how the “plumbing” works.

What about futures that conditionally alias a value?

Yeah, good point, those kind of examples have more room for confusion. Like look at this:

tokio::task::spawn(async move {
    if false {
        do_something_with(some_value.alias());
    }
});

In this example, there is code that uses some_value with an alias, but only under if false. So what happens? I would assume that indeed the future will capture an alias of some_value, in just the same way that this future will move some_value, even though the relevant code is dead:

tokio::task::spawn(async move {
    if false {
        do_something_with(some_value);
    }
});

Can you give more details about the closure desugaring you imagine?

Yep! I am thinking of something like this:

  • If there is an explicit capture clause, use that.
  • Else:
    • For non-move closures/futures, no changes from today:
      • Categorize usage of each place and pick the “weakest option” that is available:
        • by ref
        • by mut ref
        • by move
    • For move closures/futures, we would change the desugaring:
      • Categorize usage of each place P and decide whether to capture that place…
        • by clone/alias, if there is at least one call to P.clone() or P.alias() and all other usages of P require only a shared ref (reads)
        • by move, if there are no calls to P.clone() or P.alias(), or if there are usages of P that require ownership or a mutable reference
      • In other words, capture by clone/alias when a place a.b.c is only used via shared references, and at least one of those uses is a clone or alias.
        • For the purposes of this, accessing a “prefix place” a or a “suffix place” a.b.c.d is also considered an access to a.b.c.

Examples that show some edge cases:

if consume {
    x.foo();
}
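
To make these rules concrete, here is a hypothetical sketch of mine – it assumes the proposed desugaring and the proposed alias method, neither of which exists in Rust today:

let task = move || {
    send_config(cfg.alias()); // `cfg` is only read and aliased: under the
                              // proposal it would be captured by alias
    buf.push('!');            // `buf` requires `&mut`: captured by move,
                              // just as today
};

Presumably a conditional use like the if consume fragment above would still count as a consuming use, just as the if false example earlier still forces a move.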

Why not do something similar for non-move closures?

In the relevant cases, non-move closures will already just capture by shared reference. This means that later attempts to use that variable will generally succeed:

let f = async {
    //  ----- NOT async move
    self.some_a.alias()
};

do_something_else(self.some_a.alias());
//                ----------- later use succeeds

f.await;

This future does not need to take ownership of self.some_a to create an alias, so it will just capture a reference to self.some_a. That means that later uses of self.some_a can still compile, no problem. If this had been a move closure, however, that code above would currently not compile.

There is an edge case where you might get an error, which is when you are moving:

let f = async {
    self.some_a.alias()
};

do_something_else(self.some_a);
//                ----------- move!

f.await;

In that case, you can make this an async move closure and/or use an explicit capture clause.

Can you give more details about the last-use transformation you imagine?

Yep! During codegen, we would identify candidate calls to Clone::clone or Alias::alias. After borrow check has executed, we would examine each of the call sites and check the borrow check information to decide:

  • Will this place be accessed later?
  • Will some reference potentially referencing this place be accessed later?

If the answer to both questions is no, then we will replace the call with a move of the original place.

Here are some examples:

fn borrow(message: Message) -> String {
    let method = message.method.to_string();

    send_message(message.clone());
    //           ---------------
    //           would be transformed to
    //           just `message`

    method
}

fn borrow(message: Message) -> String {
    send_message(message.clone());
    //           ---------------
    //           cannot be transformed
    //           since `message.method` is
    //           referenced later

    message.method.to_string()
}

fn borrow(message: Message) -> String {
    let r = &message;

    send_message(message.clone());
    //           ---------------
    //           cannot be transformed
    //           since `r` may reference
    //           `message` and is used later.

    r.method.to_string()
}

Why are you calling it the last-use transformation and not optimization?

In the past, I’ve talked about the last-use transformation as an optimization – but I’m changing terminology here. This is because, typically, an optimization is supposed to be unobservable to users except through measurements of execution time (or through UB), and that is clearly not the case here. The transformation would be a mechanical transformation performed by the compiler in a deterministic fashion.

Would the transformation “see through” references?

I think yes, but in a limited way. In other words I would expect

Clone::clone(&foo)

and

let p = &foo;
Clone::clone(p)

to be transformed in the same way (replaced with foo), and the same would apply to more levels of intermediate usage. This would kind of “fall out” from the MIR-based optimization technique I imagine. It doesn’t have to be this way – we could be more particular about the syntax that people wrote – but I think that would be surprising.

On the other hand, you could still fool it, e.g. like so:

fn identity<T>(x: &T) -> &T { x }

identity(&foo).clone()

Would the transformation apply across function boundaries?

The way I imagine it, no. The transformation would be local to a function body. This means that one could write a force_clone function like the one below that “hides” the clone so that it will never be transformed away (this is an important capability for edition transformations!):

fn pipe<Msg: Clone>(message: Msg) -> Msg {
    log(message.clone()); // <-- keep this one
    force_clone(&message)
}

fn force_clone<Msg: Clone>(message: &Msg) -> Msg {
    // Here, the input is `&Msg`, so the clone is necessary
    // to produce a `Msg`.
    message.clone()
}

Won’t the last-use transformation change behavior by making destructors run earlier?

Potentially, yes! Consider this example, written using explicit capture clause notation and assuming we add an Alias trait:

async fn process_and_stuff(tx: mpsc::Sender<Message>) {
    tokio::spawn({
        async move(tx.alias()) {
            //     ---------- alias here
            process(tx).await
        }
    });

    do_something_unrelated().await;
}

The precise timing when Sender values are dropped can be important – when all senders have dropped, the Receiver will start returning None when you call recv. Before that, it will block waiting for more messages, since those tx handles could still be used.
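
To make those semantics concrete, here is a sketch of mine using the blocking std channels (tokio’s async recv signals the same condition by returning None):

use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel();
    let tx2 = tx.clone(); // a second sender handle - an "alias" in this post's terms

    tx.send(1).unwrap();
    drop(tx);
    assert_eq!(rx.recv().unwrap(), 1); // fine: `tx2` still exists

    drop(tx2);
    assert!(rx.recv().is_err()); // all senders gone: the receiver sees disconnection
}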

So, in process_and_stuff, when will the sender aliases be fully dropped? The answer depends on whether we do the last-use transformation or not:

  • Without the transformation, there are two aliases: the original tx and the one being held by the future. So the receiver will only start returning None when do_something_unrelated has finished and the task has completed.
  • With the transformation, the call to tx.alias() is removed, and so there is only one alias – tx, which is moved into the future, and dropped once the spawned task completes. This could well be earlier than in the previous code, which had to wait until both process_and_stuff and the new task completed.

Most of the time, running destructors earlier is a good thing. That means lower peak memory usage, faster responsiveness. But in extreme cases it could lead to bugs – a typical example is a Mutex<()> where the guard is being used to protect some external resource.
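
A sketch of mine of that last pattern: the guard’s drop point, not the data inside the mutex, is what protects the external resource, so running a destructor earlier is observable:

use std::fs::File;
use std::io::Write;
use std::sync::Mutex;

// The mutex guards the log file, which the type system knows nothing about.
static LOG_LOCK: Mutex<()> = Mutex::new(());

fn append_line(line: &str) -> std::io::Result<()> {
    let _guard = LOG_LOCK.lock().unwrap();
    let mut file = File::options().create(true).append(true).open("app.log")?;
    writeln!(file, "{line}")
    // `_guard` must live until here; dropping it earlier would let another
    // thread write to the file while we do
}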

How can we change when code runs? Doesn’t that break stability?

This is what editions are for! We have in fact done a very similar transformation before, in Rust 2021. RFC 2229 changed destructor timing around closures and it was, by and large, a non-event.

The desire for edition compatibility is in fact one of the reasons I want to make this a last-use transformation and not some kind of optimization. There is no UB in any of these examples; it’s just that understanding what Rust code does around clones/aliases is a bit more complex than it used to be, because the compiler will do an automatic transformation to those calls. The fact that this transformation is local to a function means we can decide on a call-by-call basis whether it should follow the older edition rules (where it will always occur) or the newer rules (where it may be transformed into a move).

Does that mean that the last-use transformation would change with Polonius or other borrow checker improvements?

In theory, yes, improvements to borrow-checker precision like Polonius could mean that we identify more opportunities to apply the last-use transformation. This is something we can phase in over an edition. It’s a bit of a pain, but I think we can live with it – and I’m unconvinced it will be important in practice. For example, when thinking about the improvements I expect under Polonius, I was not able to come up with a realistic example that would be impacted.

Isn’t it weird to do this after borrow check?

This last-use transformation is guaranteed not to produce code that would fail the borrow check. However, it can affect the correctness of unsafe code:

let p: *const T = &*some_place;

let q: T = some_place.clone();
//         ---------- assuming `some_place` is
//         not used later, becomes a move

unsafe {
    do_something(p);
    //           -
    // This now refers to a stack slot
    // whose value is uninitialized.
}

Note though that, in this case, there would be a lint identifying that the call to some_place.clone() will be transformed to just some_place. We could also detect simple examples like this one and report a stronger deny-by-default lint, as we often do when we see guaranteed UB.

Shouldn’t we use a keyword for this?

When I originally had this idea, I called it “use-use-everywhere” and, instead of writing x.clone() or x.alias(), I imagined writing x.use. This made sense to me because a keyword seemed like a stronger signal that this was impacting closure desugaring. However, I’ve changed my mind for a few reasons.

First, Santiago Pastorino gave strong pushback that x.use was going to be a stumbling block for new learners. They now have to see this keyword and try to understand what it means – in contrast, if they see method calls, they will likely not even notice something strange is going on.

The second reason though was TC, who argued, in the lang-team meeting, that all the arguments for why it should be ergonomic to alias a ref-counted value in a closure applied equally well to clone, depending on the needs of your application. I completely agree. As I mentioned earlier, this also addresses the concern I’ve heard with the Alias trait, which is that there are things you want to ergonomically clone but which don’t correspond to “aliases”. True.

In general I think that clone (and alias) are fundamental enough to how Rust is used that it’s ok to special case them. Perhaps we’ll identify other similar methods in the future, or generalize this mechanism, but for now I think we can focus on these two cases.

What about “deferred ref-counting”?

One point that I’ve raised from time-to-time is that I would like a solution that gives the compiler more room to optimize ref-counting to avoid incrementing ref-counts in cases where it is obvious that those ref-counts are not needed. An example might be a function like this:

fn use_data(rc: Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}

This function requires ownership of an alias to a ref-counted value but it doesn’t actually do anything but read from it. A caller like this one…

use_data(source.alias())

…doesn’t really need to increment the reference count, since the caller will be holding a reference the entire time. I often write code like this using a &:

fn use_data(rc: &Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}

so that the caller can do use_data(&source) – this then allows the callee to write rc.alias() in the case that it wants to take ownership.

I’ve basically decided to punt on addressing this problem. I think folks that are very performance sensitive can use &Arc and the rest of us can sometimes have an extra ref-count increment, but either way, the semantics for users are clear enough and (frankly) good enough.


  1. Surprisingly to me, clippy::pedantic doesn’t have a dedicated lint for unnecessary clones. This particular example does get a lint, but it’s a lint about taking an argument by value and then not consuming it. If you rewrite the example to create id locally, clippy does not complain.

The Mozilla Blog: Firefox expands fingerprint protections: advancing towards a more private web

With Firefox 145, we’re rolling out major privacy upgrades that take on browser fingerprinting — a pervasive and hidden tracking technique that lets websites identify you even when cookies are blocked or you’re in private browsing. These protections build on Mozilla’s long-term goal of building a healthier, transparent and privacy-preserving web ecosystem.

Fingerprinting builds a secret digital ID of you by collecting subtle details of your setup — ranging from your time zone to your operating system settings — that together create a “fingerprint” identifiable across websites and across browser sessions. Having a unique fingerprint means fingerprinters can continuously identify you invisibly, allowing bad actors to track you without your knowledge or consent. Online fingerprinting is able to track you for months, even when you use any browser’s private browsing mode.

Protecting people’s privacy has always been core to Firefox. Since 2020, Firefox’s built-in Enhanced Tracking Protection (ETP) has blocked known trackers and other invasive practices, while features like Total Cookie Protection and now expanded fingerprinting defenses demonstrate a broader goal: prioritizing your online freedom through innovative privacy-by-design. Since 2021, Firefox has been incrementally enhancing anti-fingerprinting protections targeting the most common pieces of information collected for suspected fingerprinting uses.

Today, we are excited to announce the completion of the second phase of defenses against fingerprinters that linger across all your browsing but aren’t in the known tracker lists. With these fingerprinting protections, the number of Firefox users trackable by fingerprinters is reduced by half.

How we built stronger defenses

Drawing from a global analysis of how real people’s browsers can be fingerprinted, Mozilla has developed new, unique and powerful defenses against real-world fingerprinting techniques. Firefox is the first browser with this level of insight into fingerprinting and the most effective deployed defenses to reduce it. Like Total Cookie Protection, one of our most innovative privacy features, these new defenses are debuting in Private Browsing Mode and ETP Strict mode initially, while we work to enable them by default.

How Firefox protects you

These fingerprinting protections work on multiple layers, building on Firefox’s already robust privacy features. For example, Firefox has long blocked known tracking and fingerprinting scripts as part of its Enhanced Tracking Protection.

Beyond blocking trackers, Firefox also limits the information it makes available to websites – a privacy-by-design approach that preemptively shrinks your fingerprint. Browsers provide a way for websites to ask for information that enables legitimate website features, e.g. your graphics hardware information, which allows sites to optimize games for your computer. But trackers can also ask for that information, for no other reason than to help build a fingerprint of your browser and track you across the web.

Since 2021, Firefox has been incrementally advancing fingerprinting protections, covering the most pervasive fingerprinting techniques. These include things like how your graphics card draws images, which fonts your computer has, and even tiny differences in how it performs math. The first phase plugged the biggest and most-common leaks of fingerprinting information.

Recent Firefox releases have tackled the next-largest leaks of user information used by online fingerprinters. This ranges from strengthening the font protections to preventing websites from getting to know your hardware details like the number of cores your processor has, the number of simultaneous fingers your touchscreen supports, and the dimensions of your dock or taskbar. The full list of detailed protections is available in our documentation.

Our research shows these improvements cut the percentage of users seen as unique by almost half.

Firefox’s new protections are a balance of disrupting fingerprinters while maintaining web usability. More aggressive fingerprinting blocking might sound better, but is guaranteed to break legitimate website features. For instance, calendar, scheduling, and conferencing tools legitimately need your real time zone. Firefox’s approach is to target the most leaky fingerprinting vectors (the tricks and scripts used by trackers) while preserving functionality many sites need to work normally. The end result is a set of layered defenses that significantly reduce tracking without downgrading your browsing experience. More details are available about both the specific behaviors and how to recognize a problem on a site and disable protections for that site alone, so you always stay in control. The goal: strong privacy protections that don’t get in your way.

What’s next for your privacy

If you open a Private Browsing window or use ETP Strict mode, Firefox is already working behind the scenes to make you harder to track. The latest phase of Firefox’s fingerprinting protections marks an important milestone in our mission to deliver smart privacy protections that work automatically – no further extensions or configurations needed. As we head into the future, Firefox remains committed to fighting for your privacy, so you get to enjoy the web on your terms. Upgrade to the latest Firefox and take back control of your privacy.

Take control of your internet

Download Firefox

The post Firefox expands fingerprint protections: advancing towards a more private web appeared first on The Mozilla Blog.

The Rust Programming Language Blog: Announcing Rust 1.91.1

The Rust team has published a new point release of Rust, 1.91.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.91.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.91.1

Rust 1.91.1 includes fixes for two regressions introduced in the 1.91.0 release.

Linker and runtime errors on Wasm

Most targets supported by Rust identify symbols by their name, but Wasm identifies them with a symbol name and a Wasm module name. The #[link(wasm_import_module)] attribute allows customizing the Wasm module name an extern block refers to:

#[link(wasm_import_module = "hello")]
extern "C" {
    pub fn world();
}

Rust 1.91.0 introduced a regression in the attribute, which could cause linker failures during compilation ("import module mismatch" errors) or the wrong function being used at runtime (leading to undefined behavior, including crashes and silent data corruption). This happened when the same symbol name was imported from two different Wasm modules across multiple Rust crates.
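
A sketch of mine of the shape of code that was affected – in the real reports the two extern blocks lived in different crates, and the module and function names here are made up:

mod host_a {
    #[link(wasm_import_module = "host_a")]
    extern "C" {
        pub fn notify(code: i32);
    }
}

mod host_b {
    #[link(wasm_import_module = "host_b")]
    extern "C" {
        // Same symbol name, different Wasm module: under 1.91.0 these two
        // imports could be conflated at link time or at runtime.
        pub fn notify(code: i32);
    }
}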

Rust 1.91.1 fixes the regression. More details are available in issue #148347.

Cargo target directory locking broken on illumos

Cargo relies on locking the target/ directory during a build to prevent concurrent invocations of Cargo from interfering with each other. Not all filesystems support locking (most notably some networked ones): if the OS returns the Unsupported error when attempting to lock, Cargo assumes locking is not supported and proceeds without it.

Cargo 1.91.0 switched from custom code interacting with the OS APIs to the File::lock standard library method (recently stabilized in Rust 1.89.0). Due to an oversight, that method always returned Unsupported on the illumos target, causing Cargo to never lock the build directory on illumos regardless of whether the filesystem supported it.

Rust 1.91.1 fixes the oversight in the standard library by enabling the File::lock family of functions on illumos, indirectly fixing the Cargo regression.
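
The pattern Cargo relies on looks roughly like the following sketch (mine), using the File::lock method stabilized in Rust 1.89:

use std::fs::File;
use std::io::{self, ErrorKind};

// Returns Ok(true) if the lock was acquired, Ok(false) if the filesystem
// does not support locking and the caller should proceed without it.
fn try_lock_build_dir(file: &File) -> io::Result<bool> {
    match file.lock() {
        Ok(()) => Ok(true),
        Err(e) if e.kind() == ErrorKind::Unsupported => Ok(false),
        Err(e) => Err(e),
    }
}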

Contributors to 1.91.1

Many people came together to create Rust 1.91.1. We couldn't have done it without all of you. Thanks!

The Mozilla Blog: Introducing early access for Firefox Support for Organizations


Increasingly, businesses, schools, and government institutions deploy Firefox at scale for security, resilience, and data sovereignty. Organizations have fine-grained administrative and orchestration control of the browser’s behavior using policies with Firefox and the Extended Support Release (ESR). Today, we’re opening early access to Firefox Support for Organizations, a new program that begins operation in January 2026.

What Firefox Support for Organizations offers

Support for Organizations is a dedicated offering for teams who need private issue triage and escalation, defined response times, custom development options, and close collaboration with Mozilla’s engineering and product teams.

  • Private support channel: Access a dedicated support system where you can open private help tickets directly with expert support engineers. Issues are triaged by severity level, with defined response times and clear escalation paths to ensure timely resolution.
  • Discounts on custom development: Paid support customers get discounts on custom development work for integration projects, compatibility testing, or environment-specific needs. With custom development as a paid add-on to support plans, Firefox can adapt alongside your infrastructure and third-party updates.
  • Strategic collaboration: Gain early insight into upcoming development and help shape the Firefox Enterprise roadmap through direct collaboration with Mozilla’s team.

Support for Organizations adds a new layer of help for teams and businesses that need confidential, reliable, and customized levels of support. All Firefox users will continue to have full access to existing public resources including documentation, the knowledge base, and community forums, and we’ll keep improving those for everyone in the future. Support plans will help us better serve users who rely on Firefox for business-critical and sensitive operations.

Get in touch for early access

If these levels of support are interesting for your organization, get in touch using our inquiry form and we’ll get back to you with more information.


Firefox Support for Organizations

Get early access

The post Introducing early access for Firefox Support for Organizations appeared first on The Mozilla Blog.

The Mozilla Blog: Under the hood: How Firefox suggests tab groups with local AI

Browser popup showing the “Create tab group” menu with color options and AI tab suggestions button.

Background

Mozilla launched Tab Grouping in early 2025, allowing tabs to be arranged and grouped with persistent labels. It was the most requested feature in the history of Mozilla Connect. While tab grouping provides a great way to manage tabs and reduce tab overload, it can be a challenge to locate which tabs to group when you have many open.

We sought to improve the workflows by providing an AI tab grouping feature that enables two key capabilities:

  • Suggesting a title for a tab group when it is created by the user.
  • Suggesting tabs from the current window to be added to a tab group.

Of course, we wanted this to work without you needing to send any data of yours to Mozilla, so we used our local Firefox AI runtime and built an efficient model that delivers the features entirely on your own device. The feature is opt-in and downloads two small ML models when the user clicks to run it the first time.

Group title suggestion

Understanding the problem

Suggesting titles for grouped tabs is a challenge because it is hard to understand user intent when tabs are first grouped. Based on our interviews when we started the project, we found that while tab group names are sometimes generic terms like ‘Shopping’ or ‘Travel’, over half the time they were specific terms such as the name of a video game, a friend or a town. We also found group names to be extremely short – 1 or 2 words.

Diagram showing Firefox tab information processed by a generative AI model to label topics like Boston Travel

Generating a digest of the group

To address these challenges, we adopt a hybrid methodology that combines a modified TF-IDF–based textual analysis with keyword extraction. We identify terms that are statistically distinctive to the titles of pages within a tab group compared to those outside it. The three most prominent keywords, along with the full titles of three randomly selected pages, are then combined to produce a concise digest representing the group, which is used as input for the subsequent stage of processing using a language model.
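
The post doesn’t spell out the exact scoring, but the general idea of ranking terms that are frequent inside the group yet rare outside it can be sketched like this (my own illustration, in Rust):

use std::collections::HashMap;

fn term_counts(titles: &[&str]) -> HashMap<String, f64> {
    let mut counts = HashMap::new();
    for title in titles {
        for word in title.split_whitespace() {
            *counts.entry(word.to_lowercase()).or_insert(0.0) += 1.0;
        }
    }
    counts
}

// Pick the `k` terms most distinctive for the group's titles.
fn top_keywords(group: &[&str], others: &[&str], k: usize) -> Vec<String> {
    let outside = term_counts(others);
    let mut scored: Vec<(String, f64)> = term_counts(group)
        .into_iter()
        .map(|(word, tf)| {
            let df = outside.get(&word).copied().unwrap_or(0.0);
            (word, tf / (1.0 + df)) // high when frequent inside, rare outside
        })
        .collect();
    scored.sort_by(|a, b| b.1.total_cmp(&a.1));
    scored.truncate(k);
    scored.into_iter().map(|(word, _)| word).collect()
}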

Generating the label

The digest string is used as an input to a generative model that returns the final label. We used a T5-based encoder-decoder model (flan-t5-base) that was fine-tuned on over 10,000 example situations and labels.

One of the key challenges in developing the model was generating the training data samples to tune the model without any user data. To do this, we defined a set of user archetypes and used an LLM API (OpenAI GPT-4) to create sample pages for a user performing various tasks. This was augmented by real page titles from the publicly available Common Crawl dataset. We then used the LLM to suggest short titles for those use cases. The process was first done at a small scale of several hundred group names. These were manually corrected and curated, adjusting for brevity and consistency. As the process scaled up, the initial 300 group names were used as examples passed to the LLM so that the additional examples created would meet those standards.

Shrinking things down

We need to get the model small enough to run on most computers. Once the initial model was trained, it was compressed into a smaller model using a process known as knowledge distillation. For distillation, we tuned a t5-efficient-tiny model from the token probability outputs of our teacher flan-t5-base model. Midway through the distillation process we also removed two encoder transformer layers and two decoder layers to further reduce the number of parameters.

Finally, the model parameters were quantized from floating point (4 bytes per parameter) to 8-bit integers. In the end this entire reduction process shrank the model from 1 GB to 57 MB – roughly an 18× reduction – with only a modest reduction in accuracy.

Suggesting tabs 

Understanding the problem

For tab suggestions, we identified a few patterns in how people prefer to group their tabs. Some people prefer grouping by domain, for instance to easily access all documents for work. Others might prefer grouping all their tabs together when they are planning a trip. Others still might prefer separating their “work” and “personal” tabs.

Our initial approach to suggesting tabs was based on semantic similarity: tabs that are topically similar are suggested.

Browser pop-up suggesting related tabs for a Boston trip using AI-based grouping

Identifying topically similar tabs

We first convert tab titles to feature vectors locally using a MiniLM embedding model. Embedding models are trained so that similar content produces vectors that are close together in embedding space. Using a similarity measure such as cosine similarity, we’re able to quantify how similar one tab title or URL is to another.

The similarity score between an anchor tab chosen by the user and a candidate tab is a linear combination of the candidate tab’s similarity with the anchor tab’s group title (if present), with the anchor tab title, and with the anchor URL. Using these values, we generate a similarity probability, and tabs above a probability threshold are suggested as part of the group:

P(t_i \mid t_a) = \sigma\left( w_1 \,\mathrm{sim}(t_i, g_a) + w_2 \,\mathrm{sim}(t_i, t_a) + w_3 \,\mathrm{sim}(u_i, u_a) \right)

where
w_1, w_2, w_3 are the learned weights,
t_i is the candidate tab title,
t_a is the anchor tab title,
g_a is the anchor group title,
u_i is the candidate url,
u_a is the anchor url, and
σ is the sigmoid function
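
In code, the scoring step might look like the following sketch (mine; the weights come from the logistic regression described below):

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm = |v: &[f32]| v.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm(a) * norm(b)).max(f32::EPSILON)
}

fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

// Probability that candidate tab `t_i` belongs with the anchor tab's group.
// `w` holds the three learned weights plus a bias term.
fn group_probability(t_i: &[f32], t_a: &[f32], g_a: &[f32], u_i: &[f32], u_a: &[f32], w: [f32; 4]) -> f32 {
    sigmoid(w[0] * cosine(t_i, g_a) + w[1] * cosine(t_i, t_a) + w[2] * cosine(u_i, u_a) + w[3])
}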

Optimizing the weights

In order to find the weights, we framed the problem as a classification task, where we calculate the precision and recall based on the tabs that were correctly classified given an anchor tab. We used synthetic data generated by OpenAI based on the user archetypes above.

We initially used a clustering approach to establish a baseline and switched to a logistic regression when we realized that treating the group, title and url features with varying importances improved our metrics.

Bar chart comparing DBScan and Logistic Regression by precision, recall, and F1 performance metrics

Using logistic regression, there was an 18% improvement against the baseline.

Performance

While the median number of tabs for people using the feature is relatively small (~25), there are some “power” users whose tab count reaches the thousands. This would cause the tab grouping feature to take uncomfortably long. 

This was part of the reason why we switched from a clustering-based approach to a linear model.

Using our performance framework, we found that the p99 latency of running logistic regression, compared to a clustering-based method such as KMeans, improved by 33%.

Bar chart comparing KMeans and Logistic Regression using percentile metrics p50, p95, and p99

Future work here would involve improving the F1 score. This could include adding a time-related component to the inference (we are more likely to group tabs that we’ve opened at the same time) or using an embedding model fine-tuned for our use case.

Thanks for reading

All of our work is open source. If you are a developer, feel free to peruse the source code of our model training, or view our topic model on Hugging Face.

Feel free to try the feature and let us know what you think!

Take control of your internet

Download Firefox

The post Under the hood: How Firefox suggests tab groups with local AI appeared first on The Mozilla Blog.

Wladimir Palant: An overview of the PPPP protocol for IoT cameras

My previous article on IoT “P2P” cameras couldn’t go into much detail on the PPPP protocol. However, there is already lots of security research on and around that protocol, and I have a feeling that there is way more to come. Pieces of information on the protocol are scattered throughout the web, yet each one approaches it from a very specific, narrow angle. This is my attempt at creating an overview so that other people don’t need to start from scratch.

While the protocol can in principle be used by any kind of device, so far I’ve only seen network-connected cameras. It isn’t really peer-to-peer as advertised but rather relies on central servers, yet the protocol allows the bulk of the data to be transferred via a direct connection between the client and the device. It’s hard to tell how many users there are, but there are lots of apps; I’m sure that I haven’t found all of them.

There are other protocols with similar approaches being used for the same goal. One is used by ThroughTek’s Kalay Platform, which has the interesting string “Charlie is the designer of P2P!!” in its codebase (32 bytes long, it seems to be used as an “encryption” key for some non-critical functionality). I recognize both the name and the “handwriting”; it looks like the PPPP protocol’s designer found a new home here. Yet PPPP seems to be still more popular than the competition, thanks to it being the protocol of choice for cheap low-end cameras.

Disclaimer: Most of the information below has been acquired by analyzing public information as well as reverse engineering applications and firmware, not by observing live systems. Consequently, there can be misinterpretations.

Update (2025-11-07): Added App2Cam Plus app to the table, representing a number of apps which all seem to belong to the ABUS Smartvest Wireless Alarm System.

Update (2025-11-07): This article originally grouped Xiaomi Home together with Yi apps. This was wrong, Xiaomi uses a completely different protocol to communicate with their PPPP devices. A brief description of this protocol has been added.

Update (2025-11-17): Added eWeLink, Owltron, littlelf and ZUMIMALL apps to the table.

The general design

The protocol’s goal is to serve as a drop-in replacement for TCP. Rather than establish a connection to a known IP address (or a name to be resolved via DNS), clients connect to a device identifier. The abstraction is supposed to hide away how the device is located (via a server that keeps track of its IP address), how a direct communication channel is established (via UDP hole punching) or when one of multiple possible fallback scenarios is being used because direct communication is not possible.

The protocol is meant to be resilient, so there are usually three redundant servers handling each network. When a device or client needs to contact a server, it sends the same message to all of them and doesn’t care which one will reply. Note: In this article “network” generally means a PPPP network, i.e. a set of servers and the devices connecting to them. While client applications typically support multiple networks, devices are always associated with a specific one determined by their device prefix.

For what is meant to be a transport layer protocol, PPPP has some serious complexity issues. It encompasses device discovery on the LAN via UDP broadcasts, UDP communication between device/client and the server and a number of (not exactly trivial) fallback solutions. It also features multiple “encryption” algorithms which are more correctly described as obfuscators and network management functionality.

Paul Marrapese’s Wireshark Dissector provides an overview of the messages used by the protocol. While it isn’t quite complete, a look into the pppp.fdesc file shows roughly 70 different message types. It’s hard to tell how all these messages play together as the protocol has not been designed as a state machine. The protocol implementation uses its previous actions as context to interpret incoming messages, but it has little indication as to which messages are expected when. Observing a running system is essential to understanding this protocol.

The complicated message exchange required to establish a connection between a device and a client has been described by Elastic Security Labs. They also provide the code of their client which implements that secret handshake.

I haven’t seen any descriptions of how the fallback approaches work when a direct connection cannot be established. Neither could I observe these fallbacks in action, presumably because the network I observed didn’t enable them. There are at least three such fallbacks: UDP traffic can be relayed by a network-provided server, it can be relayed by a “supernode” which is a device that agreed to be used as a relay, and it can be wrapped in a TCP connection to the server. The two centralized solutions incur significant costs for the network owners, rendering them unpopular. And I can imagine the “supernode” approach to be less than reliable with low-end devices like these cameras (it’s also a privacy hazard but this clearly isn’t a consideration).

I recommend going through the CS2 sales presentation to get an idea of how the protocol is meant to work. Needless to say that it doesn’t always work as intended.

The network ports

I could identify the following network ports being used:

  • UDP 32108: broadcast to discover local devices
  • UDP 32100: device/client communication to the server
  • TCP 443: client communication to the server as fallback

Note that while port 443 is normally associated with HTTPS, here it was apparently only chosen to fool firewalls. The traffic is merely obfuscated, not really encrypted.

The direct communication between the client and the device uses a random UDP port. In my understanding the ports are also randomized when this communication is relayed by a server or supernode.

The device IDs

The canonical representation of a device ID looks like this: ABC-123456-VWXYZ. Here ABC is a device prefix. While a PPPP network will often handle more than one device prefix, the mapping from a device prefix to a set of servers is supposed to be unambiguous. This rule isn’t enforced across different protocol variants, however; e.g. the device prefix EEEE is assigned differently by CS2 and iLnk.

The six digit number following the device prefix allows distinguishing different devices within a prefix. It seems that vendors can choose these numbers freely – some will assign them to devices sequentially, others go by some more complicated rules. A comment on my previous article even claims that they will sometimes reassign existing device IDs to new devices.

The final part is the verification code, meant to prevent enumeration of devices. It is generated by some secret algorithm and allows distinguishing valid device IDs from invalid ones. At least one such algorithm got leaked in the past.

Depending on the application a device ID will not always be displayed in its canonical form. It’s pretty typical for the dashes to be removed for example, in one case I saw the prefix being shortened to one letter. Finally, there are applications that will hide the device ID from the user altogether, displaying only some vendor-specific ID instead.
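
Since the canonical form is so regular, parsing it is trivial. The following Rust sketch splits an ID of the ABC-123456-VWXYZ shape into its three parts; it is purely illustrative and makes no attempt at the vendor-specific validation of the verification code.

    // Split a canonical device ID like "ABC-123456-VWXYZ" into device prefix,
    // number and verification code. Dash-less display forms would need to be
    // normalized before this can work.
    fn parse_device_id(id: &str) -> Option<(&str, &str, &str)> {
        let mut parts = id.splitn(3, '-');
        let prefix = parts.next()?;
        let number = parts.next()?;
        let check = parts.next()?;
        let valid = number.len() == 6 && number.bytes().all(|b| b.is_ascii_digit());
        valid.then_some((prefix, number, check))
    }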

The protocol variants

So far I have identified at least four variants of this protocol – if you count HLP2P, which is questionable. These protocol implementations differ significantly and aren’t really compatible. A number of apps can work with different protocol implementations, but they generally do it by embedding multiple client libraries.

Variant | Typical client library names | Typical functions
CS2 Network | libPPCS_API.so, libobject_jni.so, librtapi.so | PPPP_Initialize, PPPP_ConnectByServer
Yi Technology | PPPP_API.so, libmiio_PPPP_API.so | PPPP_Initialize, PPPP_ConnectByServer
iLnk | libvdp.so, libHiChipP2P.so | XQP2P_Initialize, XQP2P_ConnectByServer, HI_XQ_P2P_Init
HLP2P | libobject_jni.so, libOKSMARTPPCS.so | HLP2P_Initialize, HLP2P_ConnectByServer

CS2 Network

The Chinese company CS2 Network is the original developer of the protocol. Their implementation can sometimes be recognized by the device IDs alone, without even looking at any code: the letters A, I, O and Q are never present in the verification code, leaving only 22 valid letters. The same seems to apply to the Yi Technology fork, however, which is generally very similar.

The other giveaway is the “init string” which encodes network parameters. Typically these init strings are hardcoded in the application (sometimes hundreds of them) and chosen based on device prefix, though some applications retrieve them from their servers. These init strings are obfuscated, with the function PPPP_DecodeString doing the decoding. The approach is typical for CS2 Network: a lookup table filled with random values and some random algebraic operations to make things seem more complex. The init strings look like this:

DRFTEOBOJWHSFQHQEVGNDQEXFRLZGKLUGSDUAIBXBOIULLKRDNAJDNOZHNKMJO:SECRETKEY

The part before the colon decodes into:

127.0.0.1,192.168.1.1,10.0.0.1,

This is a typical list of three server IPs. No, the trailing comma isn’t a typo – it is required for correct parsing. Host names are occasionally used in init strings but this is uncommon; CS2 Network appears to generally distrust DNS and probably recommends that vendors sidestep it. The “secret” key behind the colon is optional and activates encryption of transferred data, which is better described as obfuscation. Unlike the server addresses, this part isn’t obfuscated.
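
Note that both the obfuscated sample and its decoded form suggest two input letters per output byte (62 letters, 31 characters). The Rust sketch below shows only the general shape of such table-driven decoding; the lookup table and the combining arithmetic are placeholders of my own, the real constants live inside PPPP_DecodeString.

    // Hypothetical shape of a PPPP_DecodeString-style decoder: each pair of
    // letters selects two entries from a scrambled lookup table, which are
    // then combined into one output byte. TABLE and the arithmetic below are
    // placeholders, not the values CS2 actually ships.
    const TABLE: [u8; 26] = [
        7, 1, 12, 0, 25, 3, 19, 8, 22, 4, 16, 10, 2,
        21, 6, 13, 24, 5, 18, 9, 23, 11, 15, 20, 14, 17,
    ];

    fn decode_init_string(encoded: &str) -> Vec<u8> {
        // Assumes an even number of letters A-Z.
        encoded
            .as_bytes()
            .chunks(2)
            .map(|pair| {
                let hi = TABLE[(pair[0] - b'A') as usize];
                let lo = TABLE[(pair[1] - b'A') as usize];
                ((hi & 0x0F) << 4) | (lo & 0x0F) // placeholder combination
            })
            .collect()
    }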

Yi Technology

The Xiaomi spinoff Yi Technology appears to have licensed the code of the CS2 Network implementation. They made some moderate changes to it, but it is still very similar to the original. For example, they still use the same code to decode init strings, merely with a different lookup table. Consequently, the same init string as above would look slightly different here:

LZERHWKWHUEQKOFUOREPNWERHLDLDYFSGUFOJXIXJMASBXANOTHRAFMXNXBSAM:SECRETKEY

As can be seen from Paul Marrapese’s Wireshark Dissector, the Yi Technology fork added a bunch of custom protocol messages and extended two messages presumably to provide forward compatibility. The latter is a rather unusual step for the PPPP ecosystem where the dominant approach seems to be “devices and clients connecting to the same network always use the same version of the client library which is frozen for all eternity.”

There is another notable difference: this PPPP implementation doesn’t contain any encryption functionality. There seems to be some AES encryption being performed at the application layer (which is the proper way to do it), though I didn’t look too closely.

iLnk

The protocol fork of Shenzhen Yunni Technology, iLnkP2P, seems to have been developed from scratch. The device IDs for legacy iLnk networks are easy to recognize because their verification codes consist only of the letters A to F. The algorithm generating these verification codes is public knowledge (CVE-2019-11219), so we know that these are letters taken from an MD5 hex digest. New iLnk networks appear to have verification codes that can contain all Latin letters; some new algorithm has replaced the compromised one here. Maybe they use Base64 digests now?

An iLnk init string can be recognized by the presence of a dash:

ATBBARASAXAOAQAOAQAOARBBARAZASAOARAWAYAOARAOARBBARAQAOAQAOAQAOAR-$$

The part before the dash decodes into:

3;127.0.0.1;192.168.1.1;10.0.0.1

Yes, the first list entry has to specify how many server IPs there are. The decoding approach (function HI_DecStr or XqStrDec depending on the implementation) is much simpler here; it’s a kind of Base26 encoding, as shown in the sketch below. The part after the dash can encode additional parameters related to validation of device IDs, but typically it will be $$, indicating that it is omitted and network-specific device ID validation can be skipped. As far as I can tell, iLnk networks will always send all data as plain text; there is no encryption functionality of any kind.
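
Here is a Rust sketch of that Base26 decoding, derived from the sample above rather than from the library code, so treat the 0x20 offset as an educated guess. It does reproduce the example exactly: feeding it the letters before the dash yields 3;127.0.0.1;192.168.1.1;10.0.0.1.

    // Decode the letters-only part of an iLnk init string, HI_DecStr/XqStrDec
    // style: each pair of uppercase letters encodes one byte as base 26 with
    // an offset of 0x20.
    fn xq_str_dec(encoded: &str) -> Option<String> {
        let bytes = encoded.as_bytes();
        if bytes.len() % 2 != 0 {
            return None;
        }
        let mut out = String::with_capacity(bytes.len() / 2);
        for pair in bytes.chunks(2) {
            if !pair.iter().all(u8::is_ascii_uppercase) {
                return None;
            }
            let value = (pair[0] - b'A') as u32 * 26 + (pair[1] - b'A') as u32 + 0x20;
            out.push(char::from_u32(value)?);
        }
        Some(out)
    }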

Going through the code, the network-level changes in the iLnk fork are extensive, with only the most basic messages shared with the original PPPP protocol. Some message types clash: MSG_DEV_MAX, for example, uses the same type as MSG_DEV_LGN_CRC in the CS2 implementation. This fork also introduces new magic numbers: while PPPP messages normally start with 0xF1, some messages here start with 0xA1 and one, for some reason, with 0xF2.

Unfortunately, I haven’t seen any comprehensive analysis of this protocol variant yet, so I’ll just list the message types along with their payload sizes. For messages with 20-byte payloads it can be assumed that the payload is a device ID. Don’t ask me why two pairs of messages share the same message type.

Message | Message type | Payload size
MSG_HELLO | F1 00 | 0
MSG_RLY_PKT | F1 03 | 0
MSG_DEV_LGN | F1 10 | IPv4: 40, IPv6: 152
MSG_DEV_MAX | F1 12 | 20
MSG_P2P_REQ | F1 20 | IPv4: 36, IPv6: 152
MSG_LAN_SEARCH | F1 30 | 0
MSG_LAN_SEARCH_EXT | F1 32 | 0
MSG_LAN_SEARCH_EXT_ACK | F1 33 | 52
MSG_DEV_UNREACH | F1 35 | 20
MSG_PUNCH_PKT | F1 41 | 20
MSG_P2P_RDY | F1 42 | 20
MSG_RS_LGN | F1 60 | 28
MSG_RS_LGN_EX | F1 62 | 44
MSG_LST_REQ | F1 67 | 20
MSG_RLY_HELLO | F1 70 | 0
MSG_RLY_HELLO_ACK | F1 71 | 0
MSG_RLY_PORT | F1 72 | 0
MSG_RLY_PORT_ACK | F1 73 | 8
MSG_RLY_PORT_EX_ACK | F1 76 | 264
MSG_RLY_REQ_EX | F1 77 | 288
MSG_RLY_REQ | F1 80 | IPv4: 40, IPv6: 160
MSG_HELLO_TO_ACK | F1 83 | 28
MSG_RLY_RDY | F1 84 | 20
MSG_SDEV_LGN | F1 91 | 20
MSG_MGM_ADMIN | F1 A0 | 160
MSG_MGM_DEVLIST_CTRL | F1 A2 | 20
MSG_MGM_HELLO | F1 A4 | 4
MSG_MGM_MULTI_DEV_CTRL | F1 A6 | variable
MSG_MGM_DEV_DETAIL | F1 A8 | 24
MSG_MGM_DEV_VIEW | F1 AA | 4
MSG_MGM_RLY_LIST | F1 AC | 12
MSG_MGM_DEV_CTRL | F1 AE | 24
MSG_MGM_MEM_DB | F1 B0 | 264
MSG_MGM_RLY_DETAIL | F1 B2 | 24
MSG_MGM_ADMIN_LGOUT | F1 BA | 4
MSG_MGM_ADMIN_CHG | F1 BC | 164
MSG_VGW_LGN | F1 C0 | 24
MSG_VGW_LGN_EX | F1 C0 | 24
MSG_VGW_REQ | F1 C3 | 20
MSG_VGW_REQ_ACK | F1 C4 | 4
MSG_VGW_HELLO | F1 C5 | 0
MSG_VGW_LST_REQ | F1 C6 | 20
MSG_DRW | F1 D0 | variable
MSG_DRW_ACK | F1 D1 | variable
MSG_P2P_ALIVE | F1 E0 | 0
MSG_P2P_ALIVE_ACK | F1 E1 | 0
MSG_CLOSE | F1 F0 | 0
MSG_MGM_DEV_LGN_DETAIL_DUMP | F1 F4 | 12
MSG_MGM_DEV_LGN_DUMP | F1 F4 | 12
MSG_MGM_LOG_CTRL | F1 F7 | 12
MSG_SVR_REQ | F2 10 | 0
MSG_DEV_LV_HB | A1 00 | 20
MSG_DEV_SLP_HB | A1 01 | 20
MSG_DEV_QUERY | A1 02 | 20
MSG_DEV_WK_UP_REQ | A1 04 | 20
MSG_DEV_WK_UP | A1 06 | 20

HLP2P

While I’ve seen a few apps with HLP2P code and the corresponding init strings, I am not sure whether these are still used or are merely leftovers from some past adventure. All these apps primarily use networks that rely on other protocol implementations.

HLP2P init strings contain a dash preceded by merely three letters. These three letters are ignored, and I am unsure about their significance as I’ve only seen one variant:

DAS-0123456789ABCDEF

The decoding function is called from the HLP2P_Initialize function and uses the most elaborate approach of all. The hex-encoded part after the dash is decrypted using AES-CBC, where the key and initialization vector are derived from a zero-filled buffer via some bogus MD5 hashing. The decoded result is a list of comma-separated parameters like:

DCDC07FF,das,10000001,a+a+a,127.0.0.1-192.168.1.1-10.0.0.1,ABC-CBA

The fifth parameter is a list of server IP addresses and the sixth appears to be the list of supported device prefixes.

On the network level, HLP2P is an oddity here. Despite trying hard to provide the same API as other PPPP implementations, including concepts like init strings and device IDs, it appears to be a TCP-based protocol (connecting to the server’s port 65527) with little resemblance to PPPP. UDP appears to be used for local broadcasts only (on port 65531). I didn’t spend too much time on the analysis however.

“Encryption”

The CS2 implementation of the protocol is the only one that bothers with encrypting data, though their approach is better described as obfuscation. When encryption is enabled, the function P2P_Proprietary_Encrypt is applied to all outgoing and the function P2P_Proprietary_Decrypt to all incoming messages. These functions take the encryption key (which is visible in the application code as an unobfuscated part of the init string) and mash it into four bytes. These four bytes are then used to select values from a static table that the bytes of the message should be XOR’ed with.
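
The following Rust sketch shows the shape of such a scheme. The key-mashing and the table contents are placeholders of my own; the real constants are baked into the CS2 library, so this is illustrative rather than interoperable.

    // Shape of P2P_Proprietary_Encrypt/Decrypt as described above: the key is
    // reduced to four bytes which then select XOR values from a static table.
    // Since everything is XOR, the same routine obfuscates and deobfuscates.
    fn mash_key(key: &str) -> [u8; 4] {
        // Placeholder mashing: fold the key bytes into four accumulators.
        let mut k = [0u8; 4];
        for (i, b) in key.bytes().enumerate() {
            k[i % 4] = k[i % 4].wrapping_add(b);
        }
        k
    }

    fn xor_obfuscate(data: &mut [u8], key4: [u8; 4], table: &[u8; 256]) {
        for (i, byte) in data.iter_mut().enumerate() {
            let idx = key4[i % 4].wrapping_add(i as u8) as usize; // placeholder walk
            *byte ^= table[idx];
        }
    }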

There is at least one public implementation of this “encryption”, though this one chose to skip the “key mashing” part and simply took the resulting four bytes as its key. A number of articles mention having implemented this algorithm however; it’s not really complicated.

The same obfuscation is used unconditionally for TCP traffic (TCP communication on port 443 as fallback). Here each message header contains two random bytes. The hex representation of these bytes is used as key to obfuscate message contents.

All *_CRC messages like MSG_DEV_LGN_CRC have an additional layer of obfuscation, performed by the functions PPPP_CRCEnc and PPPP_CRCDec. Unlike P2P_Proprietary_Encrypt, which is applied to the entire message including the header, PPPP_CRCEnc is only applied to the payload. As normally only messages exchanged between the device and the server are obfuscated in this way, the corresponding key tends to be contained only in the device firmware and not in the application. Here as well, the key is mashed into four bytes which are then used to generate a byte sequence that the message (extended by four + signs) is XOR’ed with. This is effectively an XOR cipher with a static key, which is easy to crack even without knowing the key.
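
A Rust sketch of why that is: with a static keystream, a single known plaintext/ciphertext pair reveals the keystream, and the result decrypts every other message obfuscated with the same key.

    // Recover the static XOR keystream from one known plaintext/ciphertext
    // pair, then reuse it against any other ciphertext from the same network.
    fn recover_keystream(plaintext: &[u8], ciphertext: &[u8]) -> Vec<u8> {
        plaintext.iter().zip(ciphertext).map(|(p, c)| p ^ c).collect()
    }

    fn apply_keystream(keystream: &[u8], ciphertext: &[u8]) -> Vec<u8> {
        ciphertext.iter().zip(keystream).map(|(c, k)| c ^ k).collect()
    }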

“Secret” messages

The CS2 implementation of the protocol contains a curiosity: two messages starting with 338DB900E559 being processed in a special way. No, this isn’t a hexadecimal representation of the bytes – it’s literally the message contents. No magic bytes, no encryption, the messages are expected to be 17 bytes long and are treated as zero-terminated strings.

I tried sending 338DB900E5592B32 (with a trailing zero byte) to a PPPP server and, surprisingly, received a response (non-ASCII bytes are represented as escape sequences):

\x0e\x0ay\x07\x08uT_ChArLiE@Cs2-NeTwOrK.CoM!

This response was consistent for this server, but another server of the same network responded slightly differently:

\x0e\x0ay\x07\x08vT_ChArLiE@Cs2-NeTwOrK.CoM!

A server from a different network which normally encrypts all communication also responded:

\x17\x06f\x12fDT_ChArLiE@Cs2-NeTwOrK.CoM!

It doesn’t take a lot of cryptanalysis knowledge to realize that an XOR cipher with a constant key is being applied here. Thanks to my “razor sharp deduction” I could conclude that the servers are replying with their respective names and these names are being XOR’ed with the string CS2MWDT_ChArLiE@Cs2-NeTwOrK.CoM!. Yes, likely the very same Charlie already mentioned at the start of this article. Hi, Charlie!
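
This is easy to verify in a few lines of Rust. The names are shorter than the 32-byte key and apparently zero-terminated, so the key shows through after the name, which is presumably how it could be recovered in the first place. Under this reading, the three responses above decode to MYKJ_1, MYKJ_2 and TUT_1.

    // XOR a response to the "secret" message against the recovered constant
    // key; everything after the first zero byte of the plaintext is just the
    // key shining through.
    const KEY: &[u8; 32] = b"CS2MWDT_ChArLiE@Cs2-NeTwOrK.CoM!";

    fn decode_server_name(response: &[u8]) -> String {
        response
            .iter()
            .zip(KEY.iter())
            .map(|(b, k)| b ^ k)
            .take_while(|&b| b != 0)
            .map(char::from)
            .collect()
    }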

I didn’t risk sending the other message, not wanting to shut down a server accidentally. But maybe Shodan wants to extend their method of detecting PPPP servers: their current approach only works when no encryption is used, yet this message seems to get replies from all CS2 servers regardless of encryption.

Applications

Once a connection between the client and the device is established, MSG_DRW messages are exchanged in both directions. The messages will be delivered in order and retransmitted if lost, giving application developers something resembling a TCP stream if you don’t look too closely. In addition, each message is tagged with a channel ID, a number between 0 and 7. It looks like channel IDs are universally ignored by devices and are only relevant in the other direction. The idea seems to be that a client receiving a video stream should still be able to send commands to the device and receive responses over the same connection.

The PPPP protocol doesn’t make any recommendations about how applications should encode their data within that stream, and so they developed a number of wildly different application-level protocols. As a rule of thumb, all devices and clients on a particular PPPP network will always speak the same application-level protocol, though there might be slight differences in the supported capabilities. Different networks can share the same protocol, allowing them to be supported within the same application. Usually, there will be multiple applications implementing the same application-level protocol and working with the same PPPP networks, but I haven’t yet seen any application supporting more than one application-level protocol.

This allows grouping the applications by their application-level protocol. Applications within the same group are largely interchangeable; the same devices can be accessed from any of them. This doesn’t necessarily mean that everything will work correctly, as there might still be subtle differences. E.g. an application meant for visual doorbells probably accesses somewhat different functionality than one meant for security cameras, even if both share the same protocol. Also, devices might be tied to the cloud infrastructure of a specific application, rendering them inaccessible to other applications working with the same PPPP network.

Fun fact: it is often very hard to know up front which protocol your device will speak. There is a huge thread with many spin-offs where people are attempting to reverse engineer A9 Mini cameras so that these can be accessed without an app. This effort is being massively complicated by the fact that all these cameras look basically the same, yet depending on the camera one out of at least four extremely different protocols could be used: HDWifiCamPro variant of SHIX JSON, YsxLite variant of iLnk binary, JXLCAM variant of CGI calls, or some protocol I don’t know because it isn’t based on PPPP.

The following is a list of PPPP-based applications I’ve identified so far, at least the ones with noteworthy user numbers. Mind you, these numbers aren’t necessarily indicative of the number of PPPP devices – some applications listed only use PPPP for some devices, likely using other protocols for most of their supported devices (particularly the ones that aren’t cameras). I try to provide a brief overview of the application-level protocol in the footnotes. Disclaimer: These applications tend to support a huge number of device prefixes in theory, so I mostly chose the “typical” ones based on which ones appear in YouTube videos or GitHub discussions.

Application | Typical device prefixes | Application-level protocol
Xiaomi Home | XMSYSGB | JSON (MISS) 1
Kami Home, Yi Home, Yi iot | TNPCHNA TNPCHNB TNPUSAC TNPUSAM TNPXGAC | binary 2
littlelf smart, Owltron, Tuya - Smart Life,Smart Living | TUYASA | binary (Thing SDK / Tuya SDK) 3
365Cam, CY365, Goodcam, HDWifiCamPro, PIX-LINK CAM, VI365, X-IOT CAM | DBG DGB DGO DGOA DGOC DGOE NMSA PIXA PIZ | JSON (SHIX) 4
eWeLink - Smart Home | EWLK | binary (iCareP2P) 5
Eye4, O-KAM Pro, Veesky | EEEE VSTA VSTB VSTC VSTD VSTF VSTJ | CGI calls 6
CamHi, CamHipro | AAFF EEEE MMMM NNNN PPPP SSAA SSAH SSAK SSAT SSSS TTTT | binary 7
CloudEdge, ieGeek Cam, ZUMIMALL | ECIPCM | binary (Meari SDK) 8
YsxLite | BATC BATE PTZ PTZA PTZB TBAT | binary (iLnk) 9
FtyCamPro | FTY FTYA FTYC FTZ FTZW | binary (iLnk) 10
JXLCAM | ACCQ BCCA BCCQ CAMA | CGI calls 11
LookCam | BHCC FHBB GHBB | JSON 12
HomeEye, LookCamPro, StarEye | AYS AYSA TUT | JSON (SHIX) 13
minicam | CAM888 | CGI calls 14
App2Cam Plus | CGAG CMAG CTAI WGAG | binary (Jsw SDK) 15

  1. Each message starts with a 4 byte command ID. The initial authorization messages (command IDs 0x100 and 0x101) contain plain JSON data. Other messages contain ChaCha20-encrypted data: first an 8 byte nonce, then the ciphertext. The encryption key is negotiated in the authorization phase. The decrypted plaintext again starts with a 4 byte command ID, followed by JSON data. There is even some Chinese documentation of this interface, though it is rather underwhelming. ↩︎

  2. The device-side implementation of the protocol is available on the web. This doesn’t appear to be reverse engineered; it is rather the source code of the real thing, complete with Chinese comments. No idea who published this or why – I found it linked by people who develop their own changes to the stock camera firmware. The extensive tnp_eventlist_msg_s structure being sent and received here supports a large number of commands. ↩︎

  3. Each message is preceded by a 16 byte header: 78 56 34 12 magic bytes, request ID, command ID, payload size. This is a very basic interface exposing merely 10 commands, most of which are requesting device information while the rest control video/audio playback. As Tuya SDK also communicates with devices by means other than PPPP, more advanced functionality is probably exposed elsewhere. ↩︎

  4. Messages are preceded by an 8 byte binary header: 06 0A A0 80 magic bytes, then four bytes of payload size (there is a JavaScript-based implementation; see also the sketch after these footnotes). The SHIX JSON format is a translation of this web API interface: /check_user.cgi?user=admin&pwd=pass becomes {"pro": "check_user", "cmd": 100, "user": "admin", "pwd": "pass"}. The pro and cmd fields are redundant, representing a command both as a string and as a number. ↩︎

  5. Each message is preceded by a 24 byte header starting with the magic bytes 88 88 76 76, followed by payload size and command ID. The other 12 bytes of the header are unused. More than 60 command IDs are supported, each with its own binary payload format. Some very basic commands have been documented in a HomeAssistant component. ↩︎

  6. The binary message headers are similar to the ones used by apps like 365Cam: 01 0A 00 00 magic bytes, then four bytes of payload size. The payload is however a web request loosely based on this web API interface: GET /check_user.cgi?loginuse=admin&loginpas=pass&user=admin&pwd=pass. Yes, user name and password are duplicated, probably because not all devices expect the loginuse/loginpas parameters? You can see in this article what the requests look like. ↩︎

  7. The 24 byte header preceding messages is similar to eWeLink: magic bytes 99 99 99 99, followed by payload size and command ID. The other 12 bytes of the header are unused. Not trusting PPPP, CamHi encrypts the payload using AES. It looks like the encryption key is an MD5 hash of a string containing the user name and password among other things. Somebody published some initial insights into the application code. ↩︎

  8. Each message is preceded by a 52 byte header starting with the magic bytes 56 56 50 99. The bulk of this header is taken up by an authentication token: a SHA1 hex digest hashing the username (always admin), device password, sequence number, command ID and payload size. The implemented interface provides merely 14 very basic commands, essentially only exposing access to recordings and the live stream. So the payload, even where present, is something trivial like a date. As Meari SDK also communicates with devices by means other than PPPP, more advanced functionality is probably exposed elsewhere. ↩︎

  9. The commands and their binary representation are contained within libvdp.so, which is the iLnk implementation of the PPPP protocol. Each message is preceded by a 12 byte header starting with the 11 0A magic bytes. The commands are two bytes long, with the higher byte indicating the command type: 2 for SD card commands, 3 for A/V commands, 4 for file commands, 5 for password commands, 6 for network commands, 7 for system commands. ↩︎

  10. While the FtyCamPro app handles different networks than YsxLite, it relies on the same libvdp.so library, meaning that the application-level protocol should be the same. It’s possible that some commands are interpreted differently however. ↩︎

  11. The protocol is very similar to the one used by VStarcam apps like O-KAM Pro. The payload has only one set of credentials however: the parameters user and pwd. It’s also a far more limited and sometimes different set of commands. ↩︎

  12. Each message is wrapped in binary data: a prefix starting with A0 AF AF AF before it, the bytes F4 F3 F2 F1 after. For some reason the prefix length seems to differ depending on whether the message is sent to the device (26 bytes) or received from it (25 bytes). I don’t know what most of it is, yet everything but the payload length at the end of the prefix seems irrelevant. This Warwick University paper has some info on the JSON payload. It’s particularly notable that the password sent along with each command isn’t actually being checked. ↩︎

  13. LookCamPro & Co. share significant amounts of code with the SHIX apps like 365Cam, they implement basically the same application-level protocol. There are differences in the supported commands however. It’s difficult to say how significant these differences are because all apps contain significant amounts of dead code, defining commands that are never used and probably not even supported. ↩︎

  14. The minicam app seems to use almost the same protocol as VStarcam apps like O-KAM Pro. It handles other networks however. Also, a few of the commands seem different from the ones used by O-KAM Pro, though it is hard to tell how significant these incompatibilities really are. ↩︎

  15. Each message is preceded by a 4 byte header: 3 bytes payload size, 1 byte I/O type (1 for AUTH, 2 for VIDEO, 3 for AUDIO, 4 for IOCTRL, 5 for FILE). The payload starts with a type-specific header. If I read the code correctly, the first 16 bytes of the payload are encrypted with AES-ECB (unpadded) while the rest is sent unchanged. There is an “xor byte” in the payload header which changes with every request, seemingly to avoid generating identical ciphertexts. Payloads smaller than 16 bytes are not encrypted. I cannot see any initialization of the encryption key beyond filling it with 32 zero bytes, which would mean that this entire mechanism is merely obfuscation. ↩︎
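
As referenced in footnote 4, here is a Rust sketch of the SHIX framing. The magic bytes and the JSON translation come straight from the footnote; the byte order of the length field and the hand-written JSON are assumptions on my part.

    // Frame a SHIX JSON command: 06 0A A0 80 magic bytes, four bytes of
    // payload size (byte order assumed little-endian here), then the JSON.
    fn frame_shix_command(json: &str) -> Vec<u8> {
        let mut msg = Vec::with_capacity(8 + json.len());
        msg.extend_from_slice(&[0x06, 0x0A, 0xA0, 0x80]);
        msg.extend_from_slice(&(json.len() as u32).to_le_bytes());
        msg.extend_from_slice(json.as_bytes());
        msg
    }

    fn main() {
        // The JSON equivalent of /check_user.cgi?user=admin&pwd=pass:
        let body = r#"{"pro": "check_user", "cmd": 100, "user": "admin", "pwd": "pass"}"#;
        let framed = frame_shix_command(body);
        assert_eq!(framed[..4], [0x06, 0x0A, 0xA0, 0x80]);
    }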

Niko Matsakis: But then again...maybe alias?

Hmm, as I re-read the post I literally just posted a few minutes ago, I got to thinking. Maybe the right name is indeed Alias, and not Share. The rationale is simple: alias can serve as both a noun and a verb. It hits that sweet spot of “common enough you know what it means, but weird enough that it can be Rust Jargon for something quite specific”. In the same way that we talk about “passing a clone of foo” we can talk about “passing an alias to foo” or an “alias of foo”. Food for thought! I’m going to try Alias on for size in future posts and see how it feels.

Niko Matsakis: Bikeshedding `Handle` and other follow-up thoughts

There have been two major sets of responses to my proposal for a Handle trait. The first is that the Handle trait seems useful but doesn’t cover all the cases where one would like to be able to ergonomically clone things. The second is that the name doesn’t seem to fit with our Rust conventions for trait names, which emphasize short verbs over nouns. The TL;DR of my response is that (1) I agree, this is why I think we should work to make Clone ergonomic as well as Handle; and (2) I agree with that too, which is why I think we should find another name. At the moment I prefer Share, with Alias coming in second.

Handle doesn’t cover everything

The first concern with the Handle trait is that, while it gives a clear semantic basis for when to implement the trait, it does not cover all the cases where calling clone is annoying. In other words, if we opt to use Handle, and then we make creating new handles very ergonomic while calling clone remains painful, there will be a temptation to use Handle when it is not appropriate.

In one of our lang team design meetings, TC raised the point that, for many applications, even an “expensive” clone isn’t really a big deal. For example, when writing CLI tools and things, I regularly clone strings and vectors of strings and hashmaps and whatever else; I could put them in an Rc or Arc but I know it just doesn’t matter.

My solution here is simple: let’s make solutions that apply to both Clone and Handle. Given that I think we need a proposal that allows for handles that are both ergonomic and explicit, it’s not hard to say that we should extend that solution to include the option for clone.

The explicit capture clause post already fits this design. I explicitly chose a design that allows users to write move(a.b.c.clone()) or move(a.b.c.handle()), and hence works equally well (or equally not well…) with both traits.

The name Handle doesn’t fit the Rust conventions

A number of people have pointed out Handle doesn’t fit the Rust naming conventions for traits like this, which aim for short verbs. You can interpret handle as a verb, but it doesn’t mean what we want. Fair enough. I like the name Handle because it gives a noun we can use to talk about, well, handles, but I agree that the trait name doesn’t seem right. There was a lot of bikeshedding on possible options but I think I’ve come back to preferring Jack Huey’s original proposal, Share (with a method share). I think Alias and alias is my second favorite. Both of them are short, relatively common verbs.

I originally felt that Share was a bit too generic and overly associated with sharing across threads – but then I at least always call &T a shared reference1, and an &T would implement Share, so it all seems to work well. Hat tip to Ariel Ben-Yehuda for pushing me on this particular name.
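
For concreteness, the shape being bikeshedded might look roughly like this in Rust. The name and the method are exactly what is under discussion, and the Clone supertrait is my assumption, so treat it as a sketch rather than a settled design.

    // Speculative sketch of the trait previously called Handle: implemented
    // by types for which duplication merely hands out another reference to
    // the same underlying value.
    trait Share: Clone {
        /// Returns a new handle to the same underlying value.
        fn share(&self) -> Self {
            self.clone()
        }
    }

    // An Arc is the canonical example of such a handle: cloning it is cheap
    // and both copies refer to the same allocation.
    impl<T> Share for std::sync::Arc<T> {}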

Coming up next

The flurry of posts in this series have been an attempt to survey all the discussions that have taken place in this area. I’m not yet aiming to write a final proposal – I think what will come out of this is a series of multiple RFCs.

My current feeling is that we should add the Hand^H^H^H^H, uh, Share trait. I also think we should add explicit capture clauses. However, while explicit capture clauses are clearly “low-level enough for a kernel”, I don’t really think they are “usable enough for a GUI”. The next post will explore another idea that I think might bring us closer to that ultimate ergonomic and explicit goal.


  1. A lot of people say immutable reference but that is simply inaccurate: an &Mutex is not immutable. I think that the term shared reference is better. ↩︎

This Week In Rust: This Week in Rust 624

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is dioxus, a framework for building cross-platform apps.

Thanks to llogiq for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

480 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Mostly positive week. We saw a great performance win implemented by #148040 and #148182, which optimizes crates with a lot of trivial constants.

Triage done by @kobzol.

Revision range: 23fced0f..35ebdf9b

Summary:

(instructions:u) | mean | range | count
Regressions ❌ (primary) | 0.8% | [0.1%, 2.9%] | 22
Regressions ❌ (secondary) | 0.5% | [0.1%, 1.7%] | 48
Improvements ✅ (primary) | -2.8% | [-16.4%, -0.1%] | 102
Improvements ✅ (secondary) | -1.9% | [-8.0%, -0.1%] | 51
All ❌✅ (primary) | -2.1% | [-16.4%, 2.9%] | 124

4 Regressions, 6 Improvements, 7 Mixed; 7 of them in rollups. 36 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust Compiler Team (MCPs only)
Language Reference
Leadership Council

No Items entered Final Comment Period this week for Cargo, Rust RFCs, Language Team or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-11-05 - 2025-12-03 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

If someone opens a PR introducing C++ to your Rust project, that code is free as in "use after"

Predrag Gruevski on Mastodon

Thanks to Brett Witty for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Add-on Reviews: Supercharge your productivity with a Firefox extension

With more work and education happening online you may find yourself needing new ways to juice your productivity. From time management to organizational tools and more, the right Firefox extension can give you an edge in the art of efficiency. 

I need help saving and organizing a lot of web content 

Raindrop.io

Organize anything you find on the web with Raindrop.io — news articles, videos, PDFs, and more.

Raindrop.io makes it simple to gather clipped web content by subject matter and organize with ease by applying tags, filters, and in-app search. This extension is perfectly suited for projects that require gathering and organizing lots of mixed media.

Gyazo

Capture, save, and share anything you find on the web. Gyazo is a great tool for personal or collaborative record keeping and research. 

Clip entire pages or just pertinent portions. Save images or take screenshots. Gyazo makes it easy to perform any type of web clipping action by either right-clicking on the page element you want to save or using the extension’s toolbar button. Everything gets saved to your Gyazo account, making it accessible across devices and collaborative teams. 

On your Gyazo homepage you can easily browse and sort everything you’ve clipped; and organize it all into shareable topics or collections.

<figcaption class="wp-element-caption">With its minimalist pop-up interface, Gyazo makes it easy to clip elements, sections, or entire web pages. </figcaption>

Evernote Web Clipper

Similar to Gyazo and Raindrop.io, Evernote Web Clipper offers a kindred feature set — clip, save, and share web content — albeit with some nice user interface distinctions. 

Evernote makes it easy to annotate images and articles for collaborative projects. It also has a strong internal search feature, allowing you to look for specific words and phrases that might appear across scattered collections of clipped content. Evernote also automatically strips out ads and social widgets on your saved pages. 

Notefox

Wouldn’t it be great if you could leave yourself little sticky notes anywhere you wanted around the web? Well now you can with Notefox.

Leave notes on specific web pages or entire domains. You can access all your notes from a central repository so everything is easy to find. The extension also includes a helpful auto-save feature so you’ll never lose a note.

Print Edit WE

If you need to save or print an important web page — but it’s mucked up with a bunch of unnecessary clutter like ads, sidebars, and other peripheral distractions — Print Edit WE lets you easily remove those unwanted elements.

Along with a host of great features like the option to save web pages as either HTML or PDF files, automatically delete graphics, and the ability to alter text or add notes, Print Edit WE also provides an array of productivity optimizations like keyboard shortcuts and mouse gestures. This is the ideal productivity extension for any type of work steeped in web research and cataloging.

Focus! Focus! Focus!

Anti-distraction and decluttering extensions can provide a major boon for online workers and students… 

Block Site 

Do you struggle to avoid certain time-wasting, productivity-sucking websites? With Block Site you can enforce restrictions on sites that tempt you away from good work habits.

Just list the websites you want to avoid for specified periods of time (certain hours of the day or some days entirely) and Block Site won’t let you access them until you’re out of the focus zone. There’s also a fun redirection feature where you’re automatically redirected to a more productive website anytime you try to visit a time waster. 

<figcaption class="wp-element-caption">Give yourself a custom message of encouragement (or scolding?) whenever you try to visit a restricted site with Block Site.</figcaption>

LeechBlock NG

Very similar in function to Block Site, LeechBlock NG offers a few intriguing twists beyond standard site-blocking features. 

In addition to blocking sites during specified times, LeechBlock NG offers an array of granular, website-specific blocking abilities – from blocking just portions of websites (e.g. you can’t access the YouTube homepage but you can see video pages), to setting restrictions on predetermined days (e.g. no Twitter on weekends), to 60-second delayed access to certain websites to give you time to reconsider that potentially productivity-killing decision.

Tomato Clock

A simple but highly effective time management tool, Tomato Clock (based on the Pomodoro technique) helps you stay on task by tracking short, focused work intervals. 

The premise is simple: it assumes everyone’s productive attention span is limited, so break up your work into manageable “tomato” chunks. Let’s say you work best in 40-minute bursts. Set Tomato Clock and your browser will notify you when it’s break time (the break length is also customizable). It’s a great way to stay focused via short sprints of productivity. The extension also keeps track of your completed tomato intervals so you can track your achieved results over time.

Time Tracker

See how much time you spend on every website you visit. Time Tracker provides a granular view of your web habits.

If you find you’re spending too much time on certain websites, Time Tracker offers a block site feature to break the bad habit.

Tabby – Window & Tab Manager

Are you overwhelmed by lots of open tabs and windows? Need an easy way to overcome desktop chaos? Tabby – Window & Tab Manager to the rescue.

Regain control of your ever-sprawling open tabs and windows with an extension that lets you quickly reorganize everything. Tabby makes it easy to find what you need in a chaotic sea of open tabs — you can word/phrase search for what you’re looking for, or use Tabby’s visual preview feature to see little thumbnail images of your open tabs without actually navigating to them. And whenever you need a clean slate but want to save your work, you can save and close all of your open tabs with a single mouse click and return to them later.

<figcaption class="wp-element-caption">Access all of Tabby’s features in one convenient pop-up. </figcaption>

Tranquility Reader

Imagine a world wide web where everything but the words is stripped away — no more distracting images, ads, tempting links to related stories, nothing — just the words you’re there to read. That’s Tranquility Reader.

Simply hit the toolbar button and instantly streamline any web page. Tranquility Reader offers quite a few other nifty features as well, like the ability to save content offline for later, customizable font size and colors, add annotations to saved pages, and more. 

Checker Plus for Gmail

Stop wasting time bouncing between the web and your Gmail app. Checker Plus for Gmail puts your inbox and more right into Firefox’s toolbar so it’s with you wherever you go on the internet.

See email notifications, read, reply, delete, mark as ‘read’ and more — all within a convenient browser pop-up.

We hope some of these great extensions will give your productivity a serious boost! The fact is there’s a vast number of extensions that can help with productivity — everything from ways to organize tons of open tabs to translation tools to bookmark managers and more.

Chris H-C: Ten-Year Moziversary

I’m a few days late publishing this, but this October marks the tenth anniversary of my first day working at Mozilla. I’m on my third hardware refresh (a Dell XPS which I can’t recommend), still just my third CEO, and now 68 reorgs in.

For something as momentous as breaking into two-digit territory, there’s not really much that’s different from last year. I’m still trying to get Firefox Desktop to use Glean instead of Legacy Telemetry and I’m still not blogging nearly as much as I’d like. Though, I did get promoted earlier this year. I am now a Senior Staff Software Engineer, which means I’m continuing on the journey of doing fewer things myself and instead empowering other people to do things.

As for predictions, I was spot on about FOG Migration actually taking off a little — in fact, quite a lot. All data collection in Firefox Desktop now either passes through Glean to get to Legacy Telemetry, has Glean mirroring alongside it, or has been removed. This is in large part thanks to big help from Florian Quèze and his willingness to stop asking when we could start and just migrate the codebase. Now we’re working on moving the business data calculations onto Glean-sent data, and getting individual teams to change over too. If you’re reading this and were looking for an excuse to remove Legacy Telemetry from your component, this is your excuse.

My prediction that there’d be an All Hands was wrong. Mozilla Leadership has decided that the US is neither a place they want to force people to travel to nor is it a place they want to force people to travel out of (and then need to attempt to return to) in the current political climate. This means that business gatherings of any size are… complicated. Some teams have had simultaneous summits in cities both within and without the US. Some teams have had one or the other side call in virtually from their usual places of work. And our team… well, we’ve not gathered at all. Which is a bummer, since we’ve had a few shuffles in the ranks and it’d be good to get us all in one place. (I will be in Toronto with some fellow senior Data Engineering folks before the end of the year, but that’s the extent of work travel.) I’m broadly in favour of removing the requirement and expectation of travel over the US border — too many people have been disappeared in too many ways. We don’t want to make anyone feel as though they have to risk it. But it seems as though we’re also leaning away from allowing people to risk it if they want to, which is a level of paternalism that I didn’t want to see.

I did have one piece of “work” travel in that I attended CSV Conf in Bologna, Italy. Finally spent my Professional Development budget, and wow what a great investment. I learned so much and had a great time, and that was despite the heat and humidity (goodness, Italy. I was in your North (ish). In September. Why you gotta 30degC me like this?). I’m on the lookout for other great conferences to attend in 2026, so if you know any, get in touch.

My prediction that I’d still be three CEOs in because the search for a new one wouldn’t have completed by now: spot on. Ditto on executing my hardware refresh, though I’m still using a personal monitor at work. I should do something about that.

My prediction that we’d stop putting AI in everything has partially come true. There’s been a noticeable shift away from “Put genAI in it and find a problem for it to (maybe) solve” towards “If you find a problem that genAI can help with, give it a try.” You wouldn’t notice it, necessarily, looking at feature announcements for Firefox, as quite a lot of the integration infrastructure all landed in the past couple of months, making headlines. My feelings on LLMs and genAI have gained layers and nuance since last year. They’re still plagiarism machines that are illegally built by the absolute worst people in ways that worsen the climate catastrophe and entrench existing inequalities. But now they’ve apparently become actually useful in some ways. I’ve read reports from very senior developers about use cases that LLMs have been able to assist with. They are narrow use cases — you must only use it to work on components you understand well, you must only use it on tasks you would do yourself if you had the time and energy — but they’re real. And that means my usual hard line of “And even if you ignore the moral, ethical, environmental, economic, and industry concerns about using LLMs: they don’t even work” no longer applies. And in situations like a for-profit corporation led by people from industry… ignoring the moral, ethical, environmental, economic, and industry concerns is de rigueur.

Add these to the sorta-kinda-okay things LLMs can do like natural language processing and aiding in training and refinement of machine translation models, and it looks as though we’re figuring out the “reheat the leftovers” and “melt butter and chocolate” use cases for these microwave ovens.

It still remains to be seen if, after the bubble pops, these nuclear-powered lake-draining art-stealing microwaves will find a home in many kitchens. I expect the fully-burdened cost will be awfully prohibitive for individuals who just want it to poorly regurgitate Wikipedia articles in a chat interface. It might even be too spicy for enterprises who think (likely erroneously) that they confer some instantaneous and generous productivity multiplier. Who knows.

All I know is that I still don’t like it. But I’ll likely find myself using one before the end of the year. If so, I intend to write up the experience and hopefully address my blogging drought by publishing it here.

Another thing that happened this year that I alluded to in last year’s post was the Google v DOJ ruling in the US. Well, the first two rulings anyway. Still years of appeal to come, but even the existing level of court seemed to agree that the business model that allows Mozilla to receive a bucketload of dollabux from Google for search engine placement in Firefox (aka, the thing that supplies most of my paycheque) should not be illegal at this time. Which is a bit of a relief. One existential threat to the business down… for now.

But mostly? This year has been feeling a little like 2016 again. Instead of The Internet of Things (IoT, where the S stands for Security), it’s genAI. Instead of Mexico and Muslims it’s Antifa and Trans people. The Jays are in the postseason again. Shit’s fucked and getting worse. But in all that, someone still has to rake the leaves and wash the dishes. And if I don’t do it, it won’t get done.

With that bright spot highlighted, what are my predictions for the new year:

  • I will requisition a second work monitor so I stop using personal hardware for work things.
  • FOG Migration (aka the Instrumentation Consolidation Project) will not fully remove all of Legacy Telemetry by this time next year. There’s evidence of cold feet on the “change business metrics to Glean-sent data” front, and even if there weren’t, there’s such a long tail that there’s no doubt something load-bearing that’d delay things to Q4 2026. I _am_ however predicting that FOG Migration will no longer be all-encompassing work — I will have a chance to do something else with my time.
  • I predict that one of the things I will do with that extra time is, since MoCo insists on a user population measurement KPI, push for a sensible user population measurement. Measuring the size of the user population by counting distinct _profiles_ we’ve _received_ a data packet from on a day (not that the data was collected on that day)? We can do better.
  • I don’t think there’s going to be an All Hands next year. If there is, I’d expect it to be Summit style: multiple cities simultaneously, with video links. Fingers crossed for Toronto finally getting its chance. Though I suppose if the people of the US rose up and took back their country, or if the current President should die, that could change the odds a little. Other US administrations saw the benefit of freedom of movement, regardless of which side of the aisle.
  • Maybe the genAI bubble will have burst? Timing these things is impossible, even if it weren’t the first time in history that this much of the US’ (and world’s) economy is inflating it. The sooner it bursts, the better, as it’s only getting bigger. (I suppose an alternative would be for the next shiny thing to happen along and the interest in genAI to dwindle more slowly with no single burst, just a bunch of crashes. Like blockchain/web3/etc. In that case a slower diminishing would be better than a sooner burst.)
  • I predict that a new MoCo CEO will have been found, but not yet sworn in by this time next year. I have no basis for this prediction: vibes only.

To another year of supporting the Mission!

:chutten

Mozilla Localization (L10N): L10n Report: November Edition 2025

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

What’s new or coming up in Firefox desktop

Firefox Backup

Firefox backup is a new feature being introduced in Firefox 145, currently testable in Beta and Nightly behind a preference flag. See here for instructions on how to test this feature.

This feature allows users to save a backup of their Firefox data to their local device at regular intervals, and later use that backup to restore their browser data or migrate their browser to a new device. One of the use cases is for current Windows 10 users who may be migrating to a new Windows 11 device. The user can save their Firefox backup to OneDrive, and later after setting up their new device can then install Firefox and restore their browsing data from the backup saved in OneDrive.

This is an alternative to using the sync functionality in combination with a Mozilla account.

Settings Redesign

Coming up in future releases, the current settings menu is being re-organized and re-designed to be more user friendly and easier to understand. New strings will be rolling out with relative frequency, but they can’t be viewed or tested in Beta or Nightly yet. If you encounter anything where you need additional context, please feel free to use the request context button in Pontoon or drop into our localization matrix channel where you can get the latest updates and engage with your fellow localizers from around the world.

What’s new or coming up in mobile

Here’s what’s been going on in Firefox for Android land lately: you may have noticed strings landing for the Toolbar refresh, the tab tray layout, as well as for a homepage revamp. All of this work is ongoing, so expect to see more strings landing soon!

On the Firefox for iOS side, there have been improvements to Search along with a revamp of the menu and tab tray. Ongoing work continues on the Translations feature integration, the homepage revamp, and the toolbar refresh.

More updates coming soon — stay tuned!

What’s new or coming up in web projects

AMO and AMO Frontend

The team has been working on identifying and removing obsolete strings to minimize unnecessary translation effort, especially for the locales that are still catching up. Recently they removed an additional 160 or so strings.

To remain in production, a locale must have both projects at or above 80% completion. If only one project meets the threshold, neither will be enabled. This policy helps prevent users from unintentionally switching between their preferred language and English. Please review your locale to confirm both projects are localized and in good standing.

If a locale already in production falls below the threshold, the team will be notified. Each month, they will review the status of all locales and manually add or remove them from production as needed.

Mozilla accounts

The Mozilla accounts team has been working on the ability to customize surfaces for the various projects that rely on Mozilla accounts for account management, such as sync, Mozilla VPN, and others. This customization applies only to a predetermined set of pages (such as sign-in, authentication, etc.) and emails (sign-up confirmation, sign-in verification code, etc.) and is managed through a content management system. This CMS process bypasses the typical build process, and as a result changes are shown in production within a very short time frame (within minutes). Each customization requires an instance of a string, even if that value hasn’t changed, so this can result in a large number of identical strings being created.

This project will be managed in a new “Mozilla accounts CMS” project within Pontoon instead of the main “Mozilla accounts” project. We are doing this for a couple reasons:

  • To reduce or eliminate the need to translate duplicate strings: In most cases it’s best to have different strings to allow for translation adjustments depending on context, however due to the nature of this project, identical strings for the same page element (e.g. “button”) will use a single translation. For example, all buttons with the text “Sign in” will only require a single translation. This has reduced the number of strings requiring translation by over 50% already, and will reduce the number of additional strings in the future.
  • To enable pretranslation: Important note – this only applies to locales that have opted-in to the pretranslation feature. Due to the CMS string process skipping the normal build cycle and being exposed to production near instantaneously, there’s a high likelihood that untranslated strings may be shown in English before teams have the chance to translate. If a locale has opted in for pretranslation, then the “Mozilla accounts CMS” project will have pretranslation enabled by default and show pretranslated strings until the team has a chance to review and update strings. If your locale has decided not to use the pretranslation feature, then nothing will change and translated strings will be displayed once your team has them translated and approved in Pontoon.

Newly published localizer facing documentation

We’ve recently updated our testing instructions for Firefox for Android and for Firefox for iOS! If you spot anything that could be improved, please file an issue — we’d love your feedback.

Friends of the Lion

Image by Elio Qoshi

  • We’ve started a new blog series spotlighting amazing contributors from Mozilla’s localization community. The first one features Selim of the Turkish community.
  • A second localizer spotlight was published! This time, meet Bogo, a long-time contributor to Bulgarian projects.

Want to learn more from your fellow contributors? Who would you like to be featured? You are invited to nominate the next candidate!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Mozilla Privacy Blog: Pathways to a fairer digital world: Mozilla shares views on the EU Digital Fairness Act

The Digital Fairness Act (DFA) is a defining opportunity to modernise Europe’s consumer protection framework for the digital age. Mozilla welcomes the European Commission’s ambition to ensure that digital environments are fair, open, and respectful of user autonomy.

As online environments are increasingly shaped by manipulative design, pervasive personalization, and emerging AI systems, traditional transparency and consent mechanisms are no longer sufficient. The DFA must therefore address how digital systems are designed and operated – from interface choices to system-level defaults and AI-mediated decision-making.

Mozilla believes the DFA, if designed in a smart way, will complement existing legislation (such as the GDPR, DSA, DMA, and AI Act) by closing long-recognized legal and enforcement gaps. When properly scoped, the DFA can simplify the regulatory landscape, reduce fragmentation, and enhance legal certainty for innovators, while also enabling consumers to exercise their choices online and bolstering overall consumer protection. Ensuring effective consumer choice is at the heart of contestable markets, encouraging innovation and new entry.

Policy recommendations

1. Recognize and outlaw harmful design practices at the interface and system levels.

  • Update existing rules to ensure that manipulative and deceptive patterns at both interface and system architecture levels are explicitly banned.
  • Extend protection beyond “dark patterns” to include AI-driven and agentic systems that steer users toward outcomes they did not freely choose.
  • Introduce anti-circumvention and burden-shifting provisions requiring platforms to demonstrate the fairness of their design and user-interaction systems.
  • Harmonize key definitions and obligations across the different legislative instruments within consumer, competition, and data protection law.

2. Establish substantive fairness standards for personalization and online advertising.

  • Prohibit exploitative or manipulative personalization based on sensitive data or vulnerabilities.
  • Guarantee simple, meaningful opt-outs that do not degrade service quality.
  • Require the use of privacy-preserving technologies (PETs) and data minimisation by design in all personalization systems.
  • Mandate regular audits to assess fairness and detect systemic bias or manipulation across the ad-tech chain.

3. Strengthen centralized enforcement and cooperation across regulators. 

  • Adopt the DFA as a Regulation and introduce centralized enforcement to ensure consistent application across Member States.
  • Create formal mechanisms for cross-regulator coordination among consumer, data protection, and competition authorities.
  • Update the “average consumer” standard to reflect real behavioral dynamics online, ensuring protection for all users, not just the hypothetical rational actor.

A strong, harmonized DFA would modernize Europe’s consumer protection architecture, strengthen trust, and promote a fairer, more competitive digital economy. By closing long-recognized legal gaps, it would reinforce genuine user choice, simplify compliance, enhance legal certainty, and support responsible innovation.

You can read our position in more detail here.

The post Pathways to a fairer digital world: Mozilla shares views on the EU Digital Fairness Act appeared first on Open Policy & Advocacy.

The Rust Programming Language BlogAnnouncing Rust 1.91.0

The Rust team is happy to announce a new version of Rust, 1.91.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.91.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.91.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.91.0 stable

aarch64-pc-windows-msvc is now a Tier 1 platform

The Rust compiler supports a wide variety of targets, but the Rust Team can't provide the same level of support for all of them. To clearly mark how supported each target is, we use a tiering system:

  • Tier 3 targets are technically supported by the compiler, but we don't check whether their code builds or passes the tests, and we don't provide any prebuilt binaries as part of our releases.
  • Tier 2 targets are guaranteed to build and we provide prebuilt binaries, but we don't execute the test suite on those platforms: the produced binaries might not work or might have bugs.
  • Tier 1 targets provide the highest support guarantee, and we run the full test suite on those platforms for every change merged in the compiler. Prebuilt binaries are also available.

Rust 1.91.0 promotes the aarch64-pc-windows-msvc target to Tier 1 support, bringing our highest guarantees to users of 64-bit ARM systems running Windows.

Add lint against dangling raw pointers from local variables

While Rust's borrow checking prevents dangling references from being returned, it doesn't track raw pointers. With this release, we are adding a warn-by-default lint on raw pointers to local variables being returned from functions. For example, code like this:

fn f() -> *const u8 {
    let x = 0;
    &x
}

will now produce a lint:

warning: a dangling pointer will be produced because the local variable `x` will be dropped
 --> src/lib.rs:3:5
  |
1 | fn f() -> *const u8 {
  |           --------- return type of the function is `*const u8`
2 |     let x = 0;
  |         - `x` is part of the function and will be dropped at the end of the function
3 |     &x
  |     ^^
  |
  = note: pointers do not have a lifetime; after returning, the `u8` will be deallocated
    at the end of the function because nothing is referencing it as far as the type system is
    concerned
  = note: `#[warn(dangling_pointers_from_locals)]` on by default

Note that the code above is not unsafe, as it itself doesn't perform any dangerous operations. Only dereferencing the raw pointer after the function returns would be unsafe. We expect future releases of Rust to add more functionality helping authors to safely interact with raw pointers, and with unsafe code more generally.
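
For comparison, here is one way to restructure the example so that no dangling pointer is produced. This is an illustrative sketch, not taken from the release notes: it transfers ownership of a heap allocation to the caller via Box::into_raw, so the pointee is no longer dropped when the function returns.

fn f() -> *const u8 {
    // The allocation now lives until someone reclaims it, so the returned
    // pointer is not dangling (and the lint does not fire).
    Box::into_raw(Box::new(0u8))
}

fn main() {
    let p = f();
    // SAFETY: `p` came from Box::into_raw and has not been freed yet.
    unsafe {
        assert_eq!(*p, 0);
        // Rebuild the Box so the allocation is freed exactly once.
        drop(Box::from_raw(p as *mut u8));
    }
}

The caller is now responsible for freeing the allocation, which is the usual contract when handing out raw pointers across a boundary.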

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Platform Support

Refer to Rust’s platform support page for more information on Rust’s tiered platform support.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.91.0

Many people came together to create Rust 1.91.0. We couldn't have done it without all of you. Thanks!

Mozilla Privacy BlogCalifornia’s Opt Me Out Act is a Win for Privacy

It’s no secret that privacy and user empowerment have always been core to Mozilla’s mission.

Over the years, we’ve consistently engaged with policymakers to advance strong privacy protections. We were thrilled when the California Consumer Privacy Act (CCPA) was signed into law, giving people the ability to opt-out and send a clear signal to websites that they don’t want their personal data tracked or sold. Despite this progress, many browsers and operating systems still failed to make these controls available or offer the tools to do so without third-party support. This gap is why we’ve pushed time and time again for additional legislation to ensure people can easily exercise their privacy rights online.

Last year, we shared our disappointment when California’s AB 3048 was not signed into law. This bill was a meaningful step toward empowering consumers. When it failed to pass, we urged policymakers to continue efforts to advance similar legislation, to close gaps and strengthen enforcement.

We can’t stress this enough: Legislation must prioritize people’s privacy and meet the expectations that consumers rightly have about treatment of their sensitive personal information.

That’s why we joined allies to support AB 566, the California Opt Me Out Act, mandating that browsers include an opt-out setting so Californians can easily communicate their privacy preferences. Earlier this month, we were happy to see it pass and Governor Newsom sign it into law.

Mozilla has long advocated for easily accessible universal opt-out mechanisms; it’s a core feature built into Firefox through our Global Privacy Control (GPC) mechanism. By requiring browsers to provide tools like GPC, California is setting an important precedent that brings us closer to a web where privacy controls are consistent, effective, and easy to use.

We hope to see similar steps in other states and at the federal level, to advance meaningful privacy protections for everyone online – the issue is more urgent than ever. We remain committed to working alongside policymakers across the board to ensure it happens.

The post California’s Opt Me Out Act is a Win for Privacy appeared first on Open Policy & Advocacy.

Mozilla Addons BlogNew Recommended Extensions arrived, thanks to our community curators

Every so often we host community-driven curatorial projects to select new Firefox Recommended Extensions. By gathering a diverse group of community contributors who share a passion for the open web and add-ons, we aim to identify new Recommended Extensions that meet Mozilla’s “highest standards of security, functionality, and user experience.”

Earlier this year we concluded yet another successful curatorial project spanning six months. We evaluated dozens of worthy nominations. Those that received the highest marks for functionality and user experience were then put through a technical review process to ensure they adhere to our Add-on Policies and industry-leading security standards. A few candidates are still working their way through the final stages of review, but most of the new batch of Recommended Extensions are now live on AMO (addons.mozilla.org), and we wanted to share the news. So without further ado, here are some exciting new additions to the program…

Yomitan is a dictionary extension uniquely suited for learning new languages (20+). An interactive pop-up provides not only word definitions but audio pronunciation guidance as well, plus other great features tailored for understanding foreign languages.

Power Thesaurus is another elite language tool that provides a vast world of synonyms just a mouse click away (antonyms too!).

Power Thesaurus brings a world of words into Firefox.

PhotoShow is a fabulous tool for any photophile. Just hover over images to instantly enlarge their appearance with an option to download in high-def. Works with 300+ top websites.

Simple Gesture for Android provides a suite of touch gestures like page scrolling, back and forth navigation, tab management, and more.

Immersive Translate is a feature-packed translation extension. Highlights include translations across mediums like web, PDF, eBooks, even video subtitles. Works great on both Firefox desktop and Android.

Time Tracker offers key insights into your web habits. Track the time you spend on websites — with an option to block specific sites if you find they’re stealing too much of your time.

Checker Plus for Gmail makes it easy to stay on top of your Gmail straight from Firefox’s toolbar. See email notifications, read, reply, delete, mark as read and more — without clicking away from wherever you are on the web.

YouTube Search Fixer de-clutters the YouTube experience by removing distracting features like Related Videos, For You, People Also Watched, Shorts — all that stuff intended to rabbit hole your attention. It’s completely customizable, so you’re free to tweak YouTube to taste.

YouTube Search Fixer puts you in control of what you see.

Notefox lets you leave notes to yourself on any website (per page or domain wide). It’s a simple, ideal tool for deep researchers or anyone who needs to leave themselves helpful notes around the web.

Sink It for Reddit features a bunch of “quality of life improvements” as its developer puts it, including color coded comments, content muting, adaptive dark mode, and more.

Raindrop.io helps you save and organize anything you find on the web. This is a tremendous tool for clipping articles, videos, even PDFs — and categorizing them by topic.

Show Video Controls for Firefox is a beloved feature for watchers of WebM formatted videos. The extension automatically enables video controls (volume/mute, play/pause, full screen, etc.).

Chrome Mask is a clever little extension designed to “mask” Firefox as the Chrome browser to websites that otherwise try to block or don’t want to support Firefox.

Congratulations to all of the developers! You’ve built incredible features that will be appreciated by millions of Firefox users.

Finally, a huge thank you to the Firefox Recommended Extensions Advisory Board who contributed their time and talent helping curate all these new Recommended extensions. Shout outs to Amber Shumaker, C. Liam Brown, Cody Ortt, Danny Colin, gsakel, Lewis, Michael Soh, Paul, Rafi Meher, and Rusty (Rusty Zone on YouTube).

We’re planning another curatorial project sometime in 2026, so if you’re the developer of a Firefox extension you believe meets the criteria to become a Recommended extension, or you’re the user of an extension you feel deserves consideration for the program, please email us nominations at amo-featured [at] mozilla [dot] org.

The post New Recommended Extensions arrived, thanks to our community curators appeared first on Mozilla Add-ons Community Blog.

Mozilla ThunderbirdMobile Progress Report: September-October 2025

A Brief Self-Introduction

Hello community, it’s a pleasure to be here and take part in a product I’ve used for many years, now with a focus on mobile. I am Jon Bott, the new Engineering Manager for the Thunderbird Mobile teams. I am passionate about native mobile development and am excited to help both mobile apps move forward.

Refining our Roadmaps

For now, as we develop, we are refining the roadmap and making more concrete plans for Thunderbird for iOS’s Alpha release in a couple of months, while finalizing our initial pass at the Account Drawer on Android (planned for release in the next beta). We also have Notification and Message List improvements under development.

Carpaccio

As a mobile product, we’ve gone through several changes over the last year or so, from large annual releases to our more recent monthly beta and release process. Our next step is to size our features so they fit better into that monthly cadence. You’ll see the benefits of this over the next few months as we simplify our planning and process, breaking our large features into smaller, more frequently delivered pieces. This is based on the Carpaccio method of breaking features down into thin slices, with the goal of delivering usable features to our users more quickly and taking community feedback on a feature’s experience and design sooner. Not everything will fit this model, of course, but more will ship sooner as we chip away at our larger goals for the platforms.

Stay Tuned

Over the next few weeks we’ll update our timelines and roadmaps to show which pieces we have high confidence in delivering over the next few months, along with a 50,000-foot (15,000-meter) view of the larger pieces we hope to tackle in the next year. Ultimately our goal is to reduce pain points more quickly and keep adding polish to Thunderbird’s mobile experience.

Progress with Thunderbird iOS

We are excited to show the progress we are making in getting the iOS app up and running. Some things are connected and others use sample data for now, but this helps us move quickly and start to share what the UI will be like moving forward. Here are the actual screens we’ve coded up:

Jon Bott

Manager, Mobile Apps

The post Mobile Progress Report: September-October 2025 appeared first on The Thunderbird Blog.

Spidermonkey Development BlogWho needs Graphviz when you can build it yourself?

We recently overhauled our internal tools for visualizing the compilation of JavaScript and WebAssembly. When SpiderMonkey’s optimizing compiler, Ion, is active, we can now produce interactive graphs showing exactly how functions are processed and optimized.

You can play with these graphs right here on this page. Simply write some JavaScript code in the test function and see what graph is produced. You can click and drag to navigate, ctrl-scroll to zoom, and drag the slider at the bottom to scrub through the optimization process.

As you experiment, take note of how stable the graph layout is, even as the sizes of blocks change or new structures are added. Try clicking a block's title to select it, then drag the slider and watch the graph change while the block remains in place. Or, click an instruction's number to highlight it so you can keep an eye on it across passes.

 

Example iongraph output

We are not the first to visualize our compiler’s internal graphs, of course, nor the first to make them interactive. But I was not satisfied with the output of common tools like Graphviz or Mermaid, so I decided to create a layout algorithm specifically tailored to our needs. The resulting algorithm is simple, fast, produces surprisingly high-quality output, and can be implemented in less than a thousand lines of code. The purpose of this article is to walk you through this algorithm and the design concepts behind it.

Read this post on desktop to see an interactive demo of iongraph.

Background

As readers of this blog already know, SpiderMonkey has several tiers of execution for JavaScript and WebAssembly code. The highest tier is known as Ion, an optimizing SSA compiler that takes the most time to compile but produces the highest-quality output.

Working with Ion frequently requires us to visualize and debug the SSA graph. Since 2011 we have used a tool for this purpose called iongraph, built by Sean Stangl. It is a simple Python script that takes a JSON dump of our compiler graphs and uses Graphviz to produce a PDF. It is perfectly adequate, and very much the status quo for compiler authors, but unfortunately the Graphviz output has many problems that make our work tedious and frustrating.

The first problem is that the Graphviz output rarely bears any resemblance to the source code that produced it. Graphviz will place nodes wherever it feels will minimize error, resulting in a graph that snakes left and right seemingly at random. There is no visual intuition for how deeply nested a block of code is, nor is it easy to determine which blocks are inside or outside of loops. Consider the following function, and its Graphviz graph:

function foo(n) {
  let result = 0;
  for (let i = 0; i < n; i++) {
    if (!!(i % 2)) {
      result = 0x600DBEEF;
    } else {
      result = 0xBADBEEF;
    }
  }

  return result;
}

Counterintuitively, the return appears before the two assignments in the body of the loop. Since this graph mirrors JavaScript control flow, we’d expect to see the return at the bottom. This problem only gets worse as graphs grow larger and more complex.

The second, related problem is that Graphviz’s output is unstable. Small changes to the input can result in large changes to the output. As you page through the graphs of each pass within Ion, nodes will jump left and right, true and false branches will swap, loops will run up the right side instead of the left, and so on. This makes it very hard to understand the actual effect of any given pass. Consider the following before and after, and notice how the second graph is almost—but not quite—a mirror image of the first, despite very minimal changes to the graph’s structure:

None of this felt right to me. Control flow graphs should be able to follow the structure of the program that produced them. After all, a control flow graph has many restrictions that a general-purpose tool would not be aware of: they have very few cycles, all of which are well-defined because they come from loops; furthermore, both JavaScript and WebAssembly have reducible control flow, meaning all loops have only one entry, and it is not possible to jump directly into the middle of a loop. This information could be used to our advantage.

Beyond that, a static PDF is far from ideal when exploring complicated graphs. Finding the inputs or uses of a given instruction is a tedious and frustrating exercise, as is following arrows from block to block. Even just zooming in and out is difficult. I eventually concluded that we ought to just build an interactive tool to overcome these limitations.

How hard could layout be?

I had one false start with graph layout, with an algorithm that attempted to sort blocks into vertical “tracks”. This broke down quickly on a variety of programs and I was forced to go back to the drawing board—in fact, back to the source of the very tool I was trying to replace.

The algorithm used by dot, the typical hierarchical layout mode for Graphviz, is known as the Sugiyama layout algorithm, from a 1981 paper by Sugiyama et al. As an introduction, I found a short series of lectures that broke the Sugiyama algorithm down into 5 steps:

  1. Cycle breaking, where the direction of some edges is flipped in order to produce a DAG.
  2. Leveling, where vertices are assigned into horizontal layers according to their depth in the graph, and dummy vertices are added to any edge that crosses multiple layers.
  3. Crossing minimization, where vertices on a layer are reordered in order to minimize the number of edge crossings.
  4. Vertex positioning, where vertices are horizontally positioned in order to make the edges as straight as possible.
  5. Drawing, where the final graph is rendered to the screen.

A screenshot from the lectures, showing the five steps above

These steps struck me as surprisingly straightforward, and provided useful opportunities to insert our own knowledge of the problem:

  • Cycle breaking would be trivial for us, since the only cycles in our data are loops, and loop backedges are explicitly labeled. We could simply ignore backedges when laying out the graph.
  • Leveling would be straightforward, and could easily be modified to better mimic the source code. Specifically, any blocks coming after a loop in the source code could be artificially pushed down in the layout, solving the confusing early-exit problem.
  • Permuting vertices to reduce edge crossings was actually just a bad idea, since our goal was stability from graph to graph. The true and false branches of a condition should always appear in the same order, for example, and a few edge crossings is a small price to pay for this stability.
  • Since reducible control flow ensures that a program’s loops form a tree, vertex positioning could ensure that loops are always well-nested in the final graph.

Taken all together, these simplifications resulted in a remarkably straightforward algorithm, with the initial implementation being just 1000 lines of JavaScript. (See this demo for what it looked like at the time.) It also proved to be very efficient, since it avoided the most computationally complex parts of the Sugiyama algorithm.

iongraph from start to finish

We will now go through the entire iongraph layout algorithm. Each section contains explanatory diagrams, in which rectangles are basic blocks and circles are dummy nodes. Loop header blocks (the single entry point to each loop) are additionally colored green.

Be aware that the block positions in these diagrams are not representative of the actual computed layout position at each point in the process. For example, vertical positions are not calculated until the very end, but it would be hard to communicate what the algorithm was doing if all blocks were drawn on a single line!

Step 1: Layering

We first sort the basic blocks into horizontal tracks called “layers”. This is very simple; we just start at layer 0 and recursively walk the graph, incrementing the layer number as we go. As we go, we track the “height” of each loop, not in pixels, but in layers.

We also take this opportunity to vertically position nodes “inside” and “outside” of loops. Whenever we see an edge that exits a loop, we defer the layering of the destination block until we are done layering the loop contents, at which point we know the loop’s height.

A note on implementation: nodes are visited multiple times throughout the process, not just once. This can produce a quadratic explosion for large graphs, but I find that an early-out is sufficient to avoid this problem in practice.

The animation below shows the layering algorithm in action. Notice how the final block in the graph is visited twice, once after each loop that branches to it, and in each case, the block is deferred until the entire loop has been layered, rather than processed immediately after its predecessor block. The final position of the block is below the entirety of both loops, rather than directly below one of its predecessors as Graphviz would do. (Remember, horizontal and vertical positions have not yet been computed; the positions of the blocks in this diagram are hardcoded for demonstration purposes.)

Implementation pseudocode
function layerBlock(block, layer = 0) {
  // Omitted for clarity: special handling of our "backedge blocks"

  // Early out if the block would not be updated
  if (layer <= block.layer) {
    return;
  }

  // Update the layer of the current block
  block.layer = Math.max(block.layer, layer);

  // Update the heights of all loops containing the current block
  let header = block.loopHeader;
  while (header) {
    header.loopHeight = Math.max(header.loopHeight, block.layer - header.layer + 1);
    header = header.parentLoopHeader;
  }

  // Recursively layer successors
  for (const succ of block.successors) {
    if (succ.loopDepth < block.loopDepth) {
      // Outgoing edges from the current loop will be layered later
      block.loopHeader.outgoingEdges.push(succ);
    } else {
      layerBlock(succ, layer + 1);
    }
  }

  // Layer any outgoing edges only after the contents of the loop have
  // been processed
  if (block.isLoopHeader()) {
    for (const succ of block.outgoingEdges) {
      layerBlock(succ, layer + block.loopHeight);
    }
  }
}

Step 2: Create dummy nodes

Any time an edge crosses a layer, we create a dummy node. This allows edges to be routed across layers without overlapping any blocks. Unlike in traditional Sugiyama, we always put downward dummies on the left and upward dummies on the right, producing a consistent “counter-clockwise” flow. This also makes it easy to read long vertical edges, whose direction would otherwise be ambiguous. (Recall how the loop backedge flipped from the right to the left in the “unstable layout” Graphviz example from before.)

In addition, we coalesce any edges that are going to the same destination by merging their dummy nodes. This heavily reduces visual noise.
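
To make the mechanics concrete, here is a small sketch of dummy-node creation with coalescing. It is illustrative Rust rather than the actual iongraph code (which is JavaScript), and the types and names are hypothetical: an edge that spans several layers is split at each intermediate layer, and two edges heading to the same destination share the dummy on each layer they both cross.

use std::collections::HashMap;

struct Node {
    layer: usize,
}

// (layer, destination) -> id of the dummy node on that layer, so edges to
// the same destination are coalesced into one run of dummies.
type Dummies = HashMap<(usize, usize), usize>;

fn route_edge(nodes: &mut Vec<Node>, dummies: &mut Dummies, src: usize, dst: usize) -> Vec<usize> {
    let mut path = vec![src];
    // Create (or reuse) one dummy per layer strictly between src and dst.
    for layer in (nodes[src].layer + 1)..nodes[dst].layer {
        let id = *dummies.entry((layer, dst)).or_insert_with(|| {
            nodes.push(Node { layer });
            nodes.len() - 1
        });
        path.push(id);
    }
    path.push(dst);
    path
}

fn main() {
    let mut nodes = vec![Node { layer: 0 }, Node { layer: 3 }];
    let mut dummies = Dummies::new();
    println!("{:?}", route_edge(&mut nodes, &mut dummies, 0, 1));
}

This sketch only handles downward edges; upward (backedge) dummies would be created the same way, just placed on the right-hand side as described above.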

Step 3: Straighten edges

This is the fuzziest and most ad-hoc part of the process. Basically, we run lots of small passes that walk up and down the graph, aligning layout nodes with each other. Our edge-straightening passes include:

  • Pushing nodes to the right of their loop header to “indent” them.
  • Walking a layer left to right, moving children to the right to line up with their parents. If any nodes overlap as a result, they are pushed further to the right.
  • Walking a layer right to left, moving parents to the right to line up with their children. This version is more conservative and will not move a node if it would overlap with another. This cleans up most issues from the first pass.
  • Straightening runs of dummy nodes so we have clean vertical lines.
  • “Sucking in” dummy runs on the left side of the graph if there is room for them to move to the right.
  • Straightening out any edges that are “nearly straight”, according to a chosen threshold. This makes the graph appear less wobbly. We do this by repeatedly “combing” the graph upward and downward, aligning parents with children, then children with parents, and so on.

It is important to note that dummy nodes participate fully in this system. If for example you have two side-by-side loops, straightening the left loop’s backedge will push the right loop to the side, avoiding overlaps and preserving the graph’s visual structure.

We do not reach a fixed point with this strategy, nor do we attempt to. I find that if you continue to repeatedly apply these particular layout passes, nodes will wander to the right forever. Instead, the layout passes are hand-tuned to produce decent-looking results for most of the graphs we look at on a regular basis. That said, this could certainly be improved, especially for larger graphs which do benefit from more iterations.

At the end of this step, all nodes have a fixed X-coordinate and will not be modified further.
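
As an illustration of what one of these passes might look like, here is a sketch (again illustrative Rust with made-up types, not the real iongraph code) of a single left-to-right pass that lines nodes up under their parents and resolves overlaps by pushing nodes to the right:

struct LayoutNode {
    x: f64,
    width: f64,
    parent: Option<usize>, // index into the previous layer
}

// One left-to-right pass over a single layer: align each node with its
// parent if possible, but never overlap the neighbor to its left.
fn align_with_parents(layer: &mut [LayoutNode], prev_layer: &[LayoutNode], gap: f64) {
    let mut min_x = f64::NEG_INFINITY;
    for node in layer.iter_mut() {
        let desired = match node.parent {
            Some(p) => prev_layer[p].x, // line up under the parent
            None => node.x,             // no parent: keep current position
        };
        node.x = desired.max(min_x);
        min_x = node.x + node.width + gap;
    }
}

fn main() {
    let parents = [LayoutNode { x: 100.0, width: 60.0, parent: None }];
    let mut layer = [LayoutNode { x: 0.0, width: 60.0, parent: Some(0) }];
    align_with_parents(&mut layer, &parents, 20.0);
    assert_eq!(layer[0].x, 100.0); // lined up under its parent
}

Running a handful of such passes in alternation, rather than solving for a global optimum, is part of what keeps the whole layout fast.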

Step 4: Track horizontal edges

Edges may overlap visually as they run horizontally between layers. To resolve this, we sort edges into parallel “tracks”, giving each a vertical offset. After tracking all the edges, we record the total height of the tracks and store it on the preceding layer as its “track height”. This allows us to leave room for the edges in the final layout step.

We first sort edges by their starting position, left to right. This produces a consistent arrangement of edges that has few vertical crossings in practice. Edges are then placed into tracks from the “outside in”, stacking rightward edges on top and leftward edges on the bottom, creating a new track if the edge would overlap with or cross any other edge.

The diagram below is interactive. Click and drag the blocks to see how the horizontal edges get assigned to tracks.

Implementation pseudocode
function trackHorizontalEdges(layer) {
  const TRACK_SPACING = 20;

  // Gather all edges on the layer, and sort left to right by starting coordinate
  const layerEdges = [];
  for (const node of layer.nodes) {
    for (const edge of node.edges) {
      layerEdges.push(edge);
    }
  }
  layerEdges.sort((a, b) => a.startX - b.startX);

  // Assign edges to "tracks" based on whether they overlap horizontally with
  // each other. We walk the tracks from the outside in and stop if we ever
  // overlap with any other edge.
  const rightwardTracks = []; // [][]Edge
  const leftwardTracks = [];  // [][]Edge
  nextEdge:
  for (const edge of layerEdges) {
    const trackSet = edge.endX - edge.startX >= 0 ? rightwardTracks : leftwardTracks;
    let lastValidTrack = null; // []Edge | null

    // Iterate through the tracks in reverse order (outside in)
    for (let i = trackSet.length - 1; i >= 0; i--) {
      const track = trackSet[i];
      let overlapsWithAnyInThisTrack = false;
      for (const otherEdge of track) {
        if (edge.dst === otherEdge.dst) {
          // Assign the edge to this track to merge arrows
          track.push(edge);
          continue nextEdge;
        }

        const al = Math.min(edge.startX, edge.endX);
        const ar = Math.max(edge.startX, edge.endX);
        const bl = Math.min(otherEdge.startX, otherEdge.endX);
        const br = Math.max(otherEdge.startX, otherEdge.endX);
        const overlaps = ar >= bl && al <= br;
        if (overlaps) {
          overlapsWithAnyInThisTrack = true;
          break;
        }
      }

      if (overlapsWithAnyInThisTrack) {
        break;
      } else {
        lastValidTrack = track;
      }
    }

    if (lastValidTrack) {
      lastValidTrack.push(edge);
    } else {
      trackSet.push([edge]);
    }
  }

  // Use track info to apply offsets to each edge for rendering.
  const tracksHeight = TRACK_SPACING * Math.max(
    0,
    rightwardTracks.length + leftwardTracks.length - 1,
  );
  let trackOffset = -tracksHeight / 2;
  for (const track of [...rightwardTracks.toReversed(), ...leftwardTracks]) {
    for (const edge of track) {
      edge.offset = trackOffset;
    }
    trackOffset += TRACK_SPACING;
  }
}

Step 5: Verticalize

Finally, we assign each node a Y-coordinate. Starting at a Y-coordinate of zero, we iterate through the layers, repeatedly adding the layer’s height and its track height, where the layer height is the maximum height of any node in the layer. All nodes within a layer receive the same Y-coordinate; this is simple and easier to read than Graphviz’s default of vertically centering nodes within a layer.

Now that every node has both an X and Y coordinate, the layout process is complete.

Implementation pseudocode
function verticalize(layers) {
  let layerY = 0;
  for (const layer of layers) {
    let layerHeight = 0;
    for (const node of layer.nodes) {
      node.y = layerY;
      layerHeight = Math.max(layerHeight, node.height);
    }
    layerY += layerHeight;
    layerY += layer.trackHeight;
  }
}

Step 6: Render

The details of rendering are out of scope for this article, and depend on the specific application. However, I wish to highlight a stylistic decision that I feel makes our graphs more readable.

When rendering edges, we use a style inspired by railroad diagrams. These have many advantages over the Bézier curves employed by Graphviz. First, straight lines feel more organized and are easier to follow when scrolling up and down. Second, they are easy to route (vertical when crossing layers, horizontal between layers). Third, they are easy to coalesce when they share a destination, and the junctions provide a clear indication of the edge’s direction. Fourth, they always cross at right angles, improving clarity and reducing the need to avoid edge crossings in the first place.

Consider the following example. There are several edge crossings that may traditionally be considered undesirable—yet the edges and their directions remain clear. Of particular note is the vertical junction highlighted in red on the left: not only is it immediately clear that these edges share a destination, but the junction itself signals that the edges are flowing downward. I find this much more pleasant than the “rat’s nest” that Graphviz tends to produce.

Examples of railroad-diagram edges
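
Because the edges are orthogonal, rendering one is little more than string formatting. Here is a sketch of what an edge path might look like as SVG (hypothetical coordinates and function; the real renderer also draws arrowheads and junction dots):

// Drop out of the source block, run along the edge's horizontal track
// (the offset computed in step 4), then drop into the destination.
fn railroad_path(x0: f64, y0: f64, x1: f64, y1: f64, track_y: f64) -> String {
    format!("M {x0} {y0} V {track_y} H {x1} V {y1}")
}

fn main() {
    // e.g. "M 40 100 V 130 H 200 V 160"
    println!("{}", railroad_path(40.0, 100.0, 200.0, 160.0, 130.0));
}

Every segment is axis-aligned, which is exactly why crossings stay legible: two edges can only ever meet at right angles.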

Why does this work?

It may seem surprising that such a simple (and stupid) layout algorithm could produce such readable graphs, when more sophisticated layout algorithms struggle. However, I feel that the algorithm succeeds because of its simplicity.

Most graph layout algorithms are optimization problems, where error is minimized on some chosen metrics. However, these metrics seem to correlate poorly to readability in practice. For example, it seems good in theory to rearrange nodes to minimize edge crossings. But a predictable order of nodes seems to produce more sensible results overall, and simple rules for edge routing are sufficient to keep things tidy. (As a bonus, this also gives us layout stability from pass to pass.) Similarly, layout rules like “align parents with their children” produce more readable results than “minimize the lengths of edges”.

Furthermore, by rejecting the optimization problem, a human author gains more control over the layout. We are able to position nodes “inside” of loops, and push post-loop content down in the graph, because we reject this global constraint-solver approach. Minimizing “error” is meaningless compared to a human maximizing meaning through thoughtful design.

And finally, the resulting algorithm is simply more efficient. All the layout passes in iongraph are easy to program and scale gracefully to large graphs because they run in roughly linear time. It is better, in my view, to run a fixed number of layout iterations according to your graph complexity and time budget, rather than to run a complex constraint solver until it is “done”.

By following this philosophy, even the worst graphs become tractable. Below is a screenshot of a zlib function, compiled to WebAssembly, and rendered using the old tool.

spaghetti nightmare!!

It took about ten minutes for Graphviz to produce this spaghetti nightmare. By comparison, iongraph can now lay out this function in 20 milliseconds. The result is still not particularly beautiful, but it renders thousands of times faster and is much easier to navigate.

better spaghetti

Perhaps programmers ought to put less trust into magic optimizing systems, especially when a human-friendly result is the goal. Simple (and stupid) algorithms can be very effective when applied with discretion and taste.

Future work

We have already integrated iongraph into the Firefox profiler, making it easy for us to view the graphs of the most expensive or impactful functions we find in our performance work. Unfortunately, this is only available in specific builds of the SpiderMonkey shell, and is not available in full browser builds. This is due to architectural differences in how profiling data is captured and the flags with which the browser and shell are built. I would love for Firefox users to someday be able to view these graphs themselves, but at the moment we have no plans to expose this to the browser. However, one bug tracking some related work can be found here.

We will continue to sporadically update iongraph with more features to aid us in our work. We have several ideas for new features, including richer navigation, search, and visualization of register allocation info. However, we have no explicit roadmap for when these features may be released.

To experiment with iongraph locally, you can run a debug build of the SpiderMonkey shell with IONFLAGS=logs; this will dump information to /tmp/ion.json. This file can then be loaded into the standalone deployment of iongraph. Please be aware that the user experience is rough and unpolished in its current state.

The source code for iongraph can be found on GitHub. If this subject interests you, we would welcome contributions to iongraph and its integration into the browser. The best place to reach us is our Matrix chat.


Thanks to Matthew Gaudet, Asaf Gartner, and Colin Davidson for their feedback on this article.

Will Kahn-GreeneOpen Source Project Maintenance 2025

Every October, I do a maintenance pass on all my projects. At a minimum, that involves dropping support for whatever Python version is no longer supported and adding support for the most recently released Python version. While doing that, I go through the issue tracker, answer questions, and fix whatever I can fix. Then I release new versions. Then I think about which projects I should deprecate and figure out a deprecation plan for them.

This post covers the 2025 round.

TL;DR

Read more… (7 min remaining to read)

Mozilla Attack & DefenseFirefox Security & Privacy Newsletter 2025 Q3

Welcome to the Q3 2025 edition of the Firefox Security and Privacy newsletter!

Security and Privacy on the web are the cornerstones of Mozilla’s manifesto, and they influence how we operate and build our products. Following are the highlights of our work from Q3 2025, grouped into the following categories:

  • Firefox Product Security & Privacy, showcasing new Security & Privacy Features and Integrations in Firefox.
  • Firefox for Enterprise, highlighting security & privacy updates for administrative features, like Enterprise policies.
  • Core Security, outlining Security and Hardening efforts within the Firefox Platform.
  • Web Security and Standards, allowing websites to better protect themselves against online threats.

Preface

Note: Some of the bugs linked below might not be accessible to the general public and are restricted to specific work groups. We de-restrict fixed security bugs after a grace period, once the majority of our user population has received Firefox updates. If a link does not work for you, please accept this as a precaution for the safety of all Firefox users.

Firefox Product Security & Privacy

  • As a follow-up to our last newsletter, Firefox has won a “Speedrunner” Award from the TrendMicro Zero Day Initiative for being consistently fast to patch security vulnerabilities. This is the second consecutive year in which Firefox has been recognized for the speedy delivery of security updates.
  • Protecting against Fingerprinting-based tracking: With Firefox 143, we’ve introduced new defenses against online fingerprinting. Our analysis of the most frequently exploited user data shows that it’s possible to significantly lower the success rate of fingerprinting attacks, without compromising a user’s browsing experience. Specifically, Firefox now standardizes how it reports device attributes such as CPU core count, screen size, and touch input capabilities. By unifying these values across our entire user base, we cut the share of Firefox users who appear unique to fingerprinting scripts from roughly 35% to just 20%.
  • Strict Tracking Protection with web compatibility in mind: When users set Firefox’s tracking protection to strict, we already warn them that stricter blocking may result in missing content or broken websites. As of Firefox 142, we are providing a list of exceptions that may help unbreak popular websites without compromising the protection. The list of exceptions is transparently shared on https://etp-exceptions.mozilla.org/.
  • DoH on Android: We have landed opt-in support for DNS over HTTPS (DoH) on Android in Firefox 143. Available in the Firefox preferences UI, the opt-in lets Firefox Android users enable DoH with the Increased or Max Protection settings to prevent network observers from tracking their browsing behaviour.
  • Improved TLS Error Pages: We improved non-overridable TLS error pages to provide more context for end users. Starting in Fx140, Firefox contains more information on why a connection was blocked, highlighting that Firefox is not causing the problem but rather that the website has a security problem and Firefox is actually keeping the user safe.
  • SafeBrowsing v5: Firefox Nightly now supports the SafeBrowsing v5 protocol, which protects against threats like phishing or malware sites, in preparation for the upcoming decommissioning of the SafeBrowsing v4 server.
  • Private Downloads in Private Browsing: When downloading a file in Private Browsing mode, Firefox 143 now asks whether to keep or delete the files after that session ends. You can adjust this behavior in Settings, if desired.
  • Improved Video sharing: As of Firefox 143, the browser permission dialog will now show a preview of the selected Video camera, making it much easier to see and decide what is being shared before providing camera permissions to a page.

Firefox for Enterprise

  • Updated Enterprise Policy for Tracking Protection: The EnableTrackingProtection policy has been updated to allow you to set the category to either strict or standard. When the category is set using this policy, the user cannot change it. The EnableTrackingProtection policy has also been updated to allow you to control Suspected Fingerprinters. For more information, see this SUMO page.
  • Improved Control over SVG, MathML, WebGL, CSP reporting and Fingerprinting Protection: The Preferences policy has been updated to allow setting the preferences mathml.disabled, svg.context-properties.content.enabled, svg.disabled, webgl.disabled, webgl.force-enabled, xpinstall.enabled, and security.csp.reporting.enabled as well as prefs beginning with privacy.baselineFingerprintingProtection or privacy.fingerprintingProtection.

Core Security

  • CRLite on Desktop and Mobile: CRLite is a faster, more reliable and privacy-protecting certificate revocation check mechanism, as compared to the traditional OCSP (Online Certificate Status Protocol). CRLite is available in Desktop versions since Firefox 142 and on Firefox for Android in Firefox 145. Read details on CRLite in the blogpost: CRLite: Fast, private, and comprehensive certificate revocation checking in Firefox.
  • Supporting Certificate Compression in QUIC: Certificate compression reduces the size of certificate chains during a Transport Layer Security (TLS) handshake, which improves performance by lowering latency and bandwidth consumption. The three compression algorithms zlib, brotli, and zstd are available in QUIC starting with Firefox 143.

Web Security & Standards

  • Improved Cache removal: When a website uses the "cache" directive of the Clear-Site-Data response header, Firefox 141 now also clears the back/forward cache (bfcache). This allows a site to ensure that private session details can be removed, even if a user uses the browser back button (see the example after this list). (bug 1930501).
  • Easy URL Pattern Matching: The URL Pattern API is fully supported as of Firefox 142, enabling you to match and parse URLs using a standardized pattern syntax. (bug 1731418).
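
For reference, the Clear-Site-Data response header mentioned above looks like this; directive names are quoted, per the specification, and several directives (such as "cookies" or "storage") can be combined:

HTTP/1.1 200 OK
Clear-Site-Data: "cache"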

Going Forward

As a Firefox user, you will automatically receive all of the security and privacy improvements mentioned above through Firefox’s automatic updates. If you aren’t a Firefox user yet, you can download Firefox to experience fast and safe browsing while supporting Mozilla’s mission of a healthy, safe and accessible web for everyone.

Thanks to everyone who helps make Firefox and the open web more secure and privacy-respecting.

See you next time with the Q4 2025 Report!
- Firefox Security and Privacy Teams.

The Rust Programming Language BlogProject goals for 2025H2

On Sep 9, we merged RFC 3849, declaring our goals for the “second half” of 2025 -- well, the last 3 months, at least, since “yours truly” ran a bit behind getting the goals program organized.

Flagship themes

In prior goals programs, we had a few major flagship goals, but since many of these goals were multi-year programs, it was hard to see what progress had been made. This time we decided to organize things a bit differently. We established four flagship themes, each of which covers a number of more specific goals. These themes cover the goals we expect to be the most impactful and constitute our major focus as a Project for the remainder of the year. The four themes identified in the RFC are as follows:

  • Beyond the &, making it possible to create user-defined smart pointers that are as ergonomic as Rust's built-in references &.
  • Unblocking dormant traits, extending the core capabilities of Rust's trait system to unblock long-desired features for language interop, lending iteration, and more.
  • Flexible, fast(er) compilation, making it faster to build Rust programs and improving support for specialized build scenarios like embedded usage and sanitizers.
  • Higher-level Rust, making higher-level usage patterns in Rust easier.
"Beyond the &"
Goal | Point of contact | Team(s) and Champion(s)
Reborrow traits | Aapo Alasuutari | compiler (Oliver Scherer), lang (Tyler Mandry)
Design a language feature to solve Field Projections | Benno Lossin | lang (Tyler Mandry)
Continue Experimentation with Pin Ergonomics | Frank King | compiler (Oliver Scherer), lang (TC)

One of Rust's core value propositions is that it's a "library-based language"—libraries can build abstractions that feel built-in to the language even when they're not. Smart pointer types like Rc and Arc are prime examples, implemented purely in the standard library yet feeling like native language features. However, Rust's built-in reference types (&T and &mut T) have special capabilities that user-defined smart pointers cannot replicate. This creates a "second-class citizen" problem where custom pointer types can't provide the same ergonomic experience as built-in references.

The "Beyond the &" initiative aims to share the special capabilities of &, allowing library authors to create smart pointers that are truly indistinguishable from built-in references in terms of syntax and ergonomics. This will enable more ergonomic smart pointers for use in cross-language interop (e.g., references to objects in other languages like C++ or Python) and for low-level projects like Rust for Linux that use smart pointers to express particular data structures.
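
A minimal illustration of that gap (a contrived sketch; MyPtr is a made-up type, and the Reborrow traits goal listed above is about closing exactly this difference): a built-in &mut is implicitly reborrowed at each call site, while a user-defined pointer type is simply moved.

struct MyPtr<T>(*mut T);

fn use_builtin(_r: &mut i32) {}
fn use_custom(_p: MyPtr<i32>) {}

fn demo(r: &mut i32, p: MyPtr<i32>) {
    use_builtin(r); // implicitly reborrowed, so `r` stays usable...
    use_builtin(r); // ...and this second call compiles fine.

    use_custom(p);    // `p` is moved here...
    // use_custom(p); // ...so this would fail: use of moved value.
}

fn main() {
    let (mut a, mut b) = (1, 2);
    demo(&mut a, MyPtr(&mut b));
}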

"Unblocking dormant traits"
Goal | Point of contact | Team(s) and Champion(s)
Evolving trait hierarchies | Taylor Cramer | compiler, lang (Taylor Cramer), libs-api, types (Oliver Scherer)
In-place initialization | Alice Ryhl | lang (Taylor Cramer)
Next-generation trait solver | lcnr | types (lcnr)
Stabilizable Polonius support on nightly | Rémy Rakic | types (Jack Huey)
SVE and SME on AArch64 | David Wood | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras), types

Rust's trait system is one of its most powerful features, but it has a number of longstanding limitations that are preventing us from adopting new patterns. The goals in this category unblock a number of new capabilities:

  • Polonius will enable new borrowing patterns, and in particular unblock "lending iterators" (see the sketch after this list). Over the last few goal periods, we have identified an "alpha" version of Polonius that addresses the most important cases while being relatively simple and optimizable. Our goal for 2025H2 is to implement this algorithm in a form that is ready for stabilization in 2026.
  • The next-generation trait solver is a refactored trait solver that unblocks better support for numerous language features (implied bounds, negative impls, the list goes on) in addition to closing a number of existing bugs and sources of unsoundness. Over the last few goal periods, the trait solver went from being an early prototype to being in production use for coherence checking. The goal for 2025H2 is to prepare it for stabilization.
  • The work on evolving trait hierarchies will make it possible to refactor some parts of an existing trait into a new supertrait so they can be used on their own. This unblocks a number of features where the existing trait is insufficiently general, in particular stabilizing support for custom receiver types, a prior Project goal that wound up blocked on this refactoring. This will also make it safer to provide stable traits in the standard library while preserving the ability to evolve them in the future.
  • The work to expand Rust's Sized hierarchy will permit us to express types that are neither Sized nor ?Sized, such as extern types (which have no size) or Arm's Scalable Vector Extension (which have a size that is known at runtime but not at compilation time). This goal builds on RFC #3729 and RFC #3838, authored in previous Project goal periods.
  • In-place initialization allows creating structs and values that are tied to a particular place in memory. While useful directly for projects doing advanced C interop, it also unblocks expanding dyn Trait to support async fn and -> impl Trait methods, as compiling such methods requires the ability for the callee to return a future whose size is not known to the caller.
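
To ground the lending-iterator example: the pattern below is expressible with today’s generic associated types, but borrow-checker limitations make it painful to use in more involved or generic code, which is part of what the Polonius work unblocks. The LendingIterator trait here is a community convention used for illustration, not a standard library item.

trait LendingIterator {
    type Item<'a> where Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Yields overlapping windows into a slice; each item borrows from the
// iterator itself, which the ordinary Iterator trait cannot express.
struct Windows<'s, T> {
    slice: &'s [T],
    size: usize,
    pos: usize,
}

impl<'s, T> LendingIterator for Windows<'s, T> {
    type Item<'a> = &'a [T] where Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        let end = self.pos + self.size;
        if end > self.slice.len() {
            return None;
        }
        self.pos += 1;
        Some(&self.slice[end - self.size..end])
    }
}

fn main() {
    let data = [1, 2, 3, 4];
    let mut w = Windows { slice: &data, size: 2, pos: 0 };
    while let Some(win) = w.next() {
        println!("{win:?}");
    }
}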
"Flexible, fast(er) compilation"
Goal | Point of contact | Team(s) and Champion(s)
build-std | David Wood | cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)
Promoting Parallel Front End | Sparrow Li | compiler
Production-ready cranelift backend | Folkert de Vries | compiler, wg-compiler-performance

The "Flexible, fast(er) compilation" initiative focuses on improving Rust's build system to better serve both specialized use cases and everyday development workflows:

"Higher-level Rust"
Goal | Point of contact | Team(s) and Champion(s)
Stabilize cargo-script | Ed Page | cargo (Ed Page), compiler, lang (Josh Triplett), lang-docs (Josh Triplett)
Ergonomic ref-counting: RFC decision and preview | Niko Matsakis | compiler (Santiago Pastorino), lang (Niko Matsakis)

People generally start using Rust for foundational use cases, where the requirements for performance or reliability make it an obvious choice. But once they get used to it, they often find themselves turning to Rust even for higher-level use cases, like scripting, web services, or even GUI applications. Rust is often "surprisingly tolerable" for these high-level use cases -- except for some specific pain points that, while they impact everyone using Rust, hit these use cases particularly hard. We plan two flagship goals this period in this area:

  • We aim to stabilize cargo script, a feature that allows single-file Rust programs that embed their dependencies, making it much easier to write small utilities, share code examples, and create reproducible bug reports without the overhead of full Cargo projects (see the example after this list).
  • We aim to finalize the design of ergonomic ref-counting and to finalize the experimental impl feature so it is ready for beta testing. Ergonomic ref-counting makes it less cumbersome to work with ref-counted types like Rc and Arc, particularly in closures.
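
For a flavor of cargo script (the nightly syntax at the time of writing, which may change before stabilization), a single file carries both its manifest and its code:

#!/usr/bin/env cargo
---
[dependencies]
rand = "0.8"
---

// Run with: cargo +nightly -Zscript hello.rs
fn main() {
    let n: u8 = rand::random();
    println!("random byte: {n}");
}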

What to expect next

For the remainder of 2025 you can expect monthly blog posts covering the major progress on the Project goals.

Looking at the broader picture, we have now done three iterations of the goals program, and we want to judge how it should be run going forward. To start, Nandini Sharma from CMU has been conducting interviews with various Project members to help us see what's working with the goals program and what could be improved. We expect to spend some time discussing what we should do and to be launching the next iteration of the goals program next year. Whatever form that winds up taking, Tomas Sedovic, the Rust program manager hired by the Leadership Council, will join me in running the program.

Appendix: Full list of Project goals.

Read the full slate of Rust Project goals.

The full slate of Project goals is as follows. These goals all have identified points of contact who will drive the work forward as well as a viable work plan.

Invited goals. Some of the goals below are "invited goals", meaning that for that goal to happen we need someone to step up and serve as a point of contact. To find the invited goals, look for the "Help wanted" badge in the table below. Invited goals have reserved capacity for teams and a mentor, so if you are someone looking to help Rust progress, they are a great way to get involved.

Goal | Point of contact | Team(s) and Champion(s)
Develop the capabilities to keep the FLS up to date | Pete LeVasseur | bootstrap (Jakub Beránek), lang (Niko Matsakis), opsem, spec (Pete LeVasseur), types
Getting Rust for Linux into stable Rust: compiler features | Tomas Sedovic | compiler (Wesley Wiser)
Getting Rust for Linux into stable Rust: language features | Tomas Sedovic | lang (Josh Triplett), lang-docs (TC)
Borrow checking in a-mir-formality | Niko Matsakis | types (Niko Matsakis)
Reborrow traits | Aapo Alasuutari | compiler (Oliver Scherer), lang (Tyler Mandry)
build-std | David Wood | cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)
Prototype Cargo build analysis | Weihang Lo | cargo (Weihang Lo)
Rework Cargo Build Dir Layout | Ross Sullivan | cargo (Weihang Lo)
Prototype a new set of Cargo "plumbing" commands | Help Wanted | cargo
Stabilize cargo-script | Ed Page | cargo (Ed Page), compiler, lang (Josh Triplett), lang-docs (Josh Triplett)
Continue resolving cargo-semver-checks blockers for merging into cargo | Predrag Gruevski | cargo (Ed Page), rustdoc (Alona Enraght-Moony)
Emit Retags in Codegen | Ian McCormack | compiler (Ralf Jung), opsem (Ralf Jung)
Comprehensive niche checks for Rust | Bastian Kersting | compiler (Ben Kimock), opsem (Ben Kimock)
Const Generics | Boxy | lang (Niko Matsakis)
Ergonomic ref-counting: RFC decision and preview | Niko Matsakis | compiler (Santiago Pastorino), lang (Niko Matsakis)
Evolving trait hierarchies | Taylor Cramer | compiler, lang (Taylor Cramer), libs-api, types (Oliver Scherer)
Design a language feature to solve Field Projections | Benno Lossin | lang (Tyler Mandry)
Finish the std::offload module | Manuel Drehwald | compiler (Manuel Drehwald), lang (TC)
Run more tests for GCC backend in the Rust's CI | Guillaume Gomez | compiler (Wesley Wiser), infra (Marco Ieni)
In-place initialization | Alice Ryhl | lang (Taylor Cramer)
C++/Rust Interop Problem Space Mapping | Jon Bauman | compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay), opsem
Finish the libtest json output experiment | Ed Page | cargo (Ed Page), libs-api, testing-devex
MIR move elimination | Amanieu d'Antras | compiler, lang (Amanieu d'Antras), opsem, wg-mir-opt
Next-generation trait solver | lcnr | types (lcnr)
Implement Open API Namespace Support | Help Wanted | cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols)
Promoting Parallel Front End | Sparrow Li | compiler
Continue Experimentation with Pin Ergonomics | Frank King | compiler (Oliver Scherer), lang (TC)
Stabilizable Polonius support on nightly | Rémy Rakic | types (Jack Huey)
Production-ready cranelift backend | Folkert de Vries | compiler, wg-compiler-performance
Stabilize public/private dependencies | Help Wanted | cargo (Ed Page), compiler
Expand the Rust Reference to specify more aspects of the Rust language | Josh Triplett | lang-docs (Josh Triplett), spec (Josh Triplett)
reflection and comptime | Oliver Scherer | compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett)
Relink don't Rebuild | Jane Lusby | cargo, compiler
Rust Vision Document | Niko Matsakis | leadership-council
rustc-perf improvements | James | compiler, infra
Stabilize rustdoc doc_cfg feature | Guillaume Gomez | rustdoc (Guillaume Gomez)
Add a team charter for rustdoc team | Guillaume Gomez | rustdoc (Guillaume Gomez)
SVE and SME on AArch64 | David Wood | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras), types
Rust Stabilization of MemorySanitizer and ThreadSanitizer Support | Jakob Koschel | bootstrap, compiler, infra, project-exploit-mitigations
Type System Documentation | Boxy | types (Boxy)
Unsafe Fields | Jack Wrenn | compiler (Jack Wrenn), lang (Scott McMurray)

The Mozilla BlogBetter search suggestions in Firefox

We’re working on a new feature to display direct results in your address bar as you type, so that you can skip the results page and get to the right site or answer faster.

Every major browser today supports a feature known as “search suggestions.” As you type in the address bar, your chosen search engine offers real-time suggestions for searches you might want to perform.

[Image: A Firefox browser window with a gray gradient background. The Google search bar shows “mozilla.” Google suggestions below include “mozilla firefox,” “mozilla thunderbird,” “mozilla careers,” “mozilla vpn,” and “mozilla foundation.”]

This is a helpful feature, but these suggestions always take you to a search engine results page, not necessarily the information or website you’re ultimately looking for. This is ideal for the search provider, but not always best for the user.

For example, flight status summaries on a search results page are convenient, but it would be more convenient to show that information directly in the address bar:

[Image: A Firefox browser window with an orange gradient background. The Google search bar shows “ac 8170.” The result displays an Air Canada flight from Victoria (YYJ) to Vancouver (YVR), showing departure and arrival times and that it’s “In flight” or “On time.”]

Similarly, people commonly search for a website when they don’t know or remember the exact URL. Why not skip the search?

[Image: A Firefox browser window with a green gradient background. The Google search bar shows “mdn.” Below, the top result is “Mozilla Developer Network — Your blueprint for a better internet,” with Google suggestions like “mdn web docs,” “mdn array,” and “mdn fetch.”]

Another common use case is searching for recommendations, where Firefox can show highly relevant results from sources around the web:

[Image: A Firefox browser window with a gradient pink-to-purple background. The Google search bar shows the query “bike repair boston.” Below it, Google suggestions and a featured result for “Ballantine Bike Shop” appear, showing address, rating, and hours.]

The truth is, browser address bars today are largely a conduit to your search engine. And while search engines are very useful, a single and centralized source for finding everything online is not how we want the web to work. Firefox is proudly independent, and our address bar should be too.

We experimented with the concept several years ago, but didn’t ship it[1] because we have an extremely high standard for privacy and weren’t satisfied with any design that would send your raw queries directly to us. Even though these are already sent to your search engine, Firefox is built on the principle that even Mozilla should not be able to learn what you do online. Unlike most search engines, we don’t want to know who’s searching for what, and we want to enable anyone in the world to verify that we couldn’t know even if we tried.

We now have the technical architecture to meet that bar. When Firefox requests suggestions, it encrypts your query using a new protocol we helped design called Oblivious HTTP. The encrypted request goes to a relay operated by Fastly, which can see your IP address but not the text. Mozilla can see the text, but not who it came from. We can then return a result directly or fetch one from a specialized search service. No single party can connect what you type to who you are.

[Image: A simple black-and-white diagram with three rounded rectangles labeled “Firefox,” “Relay (Operated by Fastly),” and “Mozilla.” Double arrows connect them, showing a two-way flow between Firefox ↔ Relay ↔ Mozilla.]
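
To make the information split concrete, here is a toy sketch in Rust. Everything in it is hypothetical (not Mozilla's implementation), and the XOR "encryption" is a stand-in for the HPKE-based sealing that real Oblivious HTTP uses; the point is only that the relay handles identity without content, while the gateway handles content without identity.

// What the relay (Fastly) receives: the sender's address plus opaque bytes.
struct RelayedRequest {
    client_ip: String,   // visible to the relay only
    ciphertext: Vec<u8>, // opaque to the relay
}

// Toy stand-in for the HPKE sealing step (XOR is NOT real encryption).
fn toy_seal(key: u8, bytes: &[u8]) -> Vec<u8> {
    bytes.iter().map(|b| b ^ key).collect()
}

fn relay_forward(req: RelayedRequest) -> Vec<u8> {
    // The relay can see the IP, but forwards only the ciphertext.
    println!("relay sees IP {} and {} opaque bytes", req.client_ip, req.ciphertext.len());
    req.ciphertext
}

fn gateway_open(key: u8, ciphertext: &[u8]) -> String {
    // The gateway (Mozilla) recovers the query, but the IP never arrived here.
    String::from_utf8(toy_seal(key, ciphertext)).unwrap()
}

fn main() {
    let key = 0x2a; // stands in for the gateway's key pair
    let request = RelayedRequest {
        client_ip: "203.0.113.7".into(),
        ciphertext: toy_seal(key, b"bike repair boston"),
    };
    let forwarded = relay_forward(request);
    println!("gateway sees: {:?}", gateway_open(key, &forwarded));
}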

Firefox will continue to show traditional search suggestions for all queries and add direct results only when we have high confidence they match your intent. As with search engines, some of these results may be sponsored to support Firefox, but only if they’re highly relevant, and neither we nor the sponsor will know who they’re for. We expect this to be useful to users and, hopefully, help level the playing field by allowing Mozilla to work directly with independent sites rather than mediating all web discovery through the search engine.

Running this at scale is not trivial. We need the capacity to handle the volume and servers close to people to avoid introducing noticeable latency. To keep things smooth, we are starting in the United States and will evaluate expanding into other geographies as we learn from this experience and observe how the system performs. The feature is still in development and testing and will roll out gradually over the coming year.[2]


[1] We did ship an experimental version that users could enable in settings, as well as a small set of locally-matched suggestions in some regions. Unfortunately, the former had too little reach to be worth building features for, and the latter had very poor relevance and utility due to technical limitations (most notably, the size of the local database).

[2] Where the feature is available, you can disable it by unchecking “Retrieve suggestions as you type” in the “Search” pane in Firefox settings. If this box is not yet available in your version of Firefox, you can pre-emptively disable it by setting browser.urlbar.quicksuggest.online.enabled to false in about:config.


The post Better search suggestions in Firefox appeared first on The Mozilla Blog.

Firefox NightlyExtensions UI Improvements and More – These Weeks in Firefox: Issue 191

Highlights

  • As part of improvements to the extensions panel, an empty state UI has been introduced to help users understand why their installed extensions may not be listed in the panel (e.g. when opening a private browsing window or enabling permanent private browsing mode).
[Image: The Firefox Extensions UI panel encouraging users to find more extensions.]

Empty state shown when no extensions are currently installed.

[Image: The Firefox Extensions panel UI explaining why no extensions are displayed in private browsing mode.]

Empty state shown when extensions are already installed but not allowed to access private browsing tabs.

[Image: A Firefox extension popup during the installation process with a checkbox enabled for the option "Allow extension to run in private windows"]

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Khalid AlHaddad
  • Kyler Riggs [:kylr]
  • Michael van Straten [:michael]
  • Pier Angelo Vendrame

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons

WebExtension APIs
  • Thanks to the enhancement contributed by Jim Gong, starting from Firefox 146 the browsingData.remove API will also allow extensions to clear the sessionStorage WebAPI data – Bug 1886894
  • Valentin Gosu introduced masque proxy support to the WebExtensions proxy API in Firefox 145 – Bug 1988988
  • Investigated and fixed a crash triggered by storing deeply nested JSON data in the storage.sync WebExtensions API backend (introduced in Firefox 135 as a side effect of the storage.sync backend changes from Bug 1888472). The fix landed in Firefox 145 and has been uplifted to Firefox 144 beta, the Firefox 143.0.3 release, and Firefox ESR 140.0.3 – Bug 1989840
  • Landed a new Glean probe to assess the real-world impact of the storage.local API IndexedDB corruption issues in the underlying sqlite3 data store (investigated as part of Bug 1979997 and Bug 1885297)
    • NOTE: a new hidden boolean about:config pref, extensions.webextensions.keepStorageOnCorrupted.storageLocal, automatically resets the storage.local IndexedDB database when the Bug 1979997 database corruption issue is detected, and prevents browser.storage.local.clear API calls from failing when the Bug 1885297 corrupted key is hit.
    • NOTE: We intend to keep the auto-reset behaviors disabled by default for a few more nightly cycles to review the new telemetry before enabling them on all channels (follow-up tracked by Bug 1992973)

DevTools

Lint, Docs and Workflow

Search and Navigation

  • Address Bar
    • Drew enabled the Important Dates feature in Germany, France, and Italy for English locales. Bug 1992811
    • Dale made the new redesigned Identity panel show the expected icon for local files. Bug 1989844
    • Dharma landed new search onboarding strings to be used in Nimbus experiments. Bug 1982132
  • Places
  • Search
    • Pier Angelo Vendrame fixed origin attribute use for OpenSearch and engine icons. Bug 1987600, Bug 1993166
    • Florian optimized searchconfig xpcshell tests to use much less CPU time.

Mike TaylorA new, new logo for the W3C

In an effort to pivot this site into a full-on graphic design side business after 2 blog posts about logos in a row (hit me up exclusively on FB to request a consultation), I thought I would reveal my new, new logo for the W3C.

It turns out they recently launched a new one, but some folks don’t love it. As an artist, it’s not my job to critique other art, but instead to offer my own compelling vision for the web.

[Image: a shitty drawing of a w, the word three spelled out, and followed by a period and the letter c]

I shouldn’t have to explain why I went with the classic dark blue and asparagus colors—that much is obvious. And of course, turning c into a file extension as a reminder that NCSA Mosaic was written in C (I didn’t go with WorldWideWeb because that was written in Objective-C and .m kinda messes it all up).

Mozilla Localization (L10N)Localizer spotlight: Bogo

About you

My name is Bogomil but people call me Bogo, and I am a translator for the Bulgarian locale. I think I got involved with the Mozilla project back in 2005, when I wrote a small search add-on/script. I became more active around 2008-2009 and have stayed involved, with just a few gaps, to this day.

I am European. I was born in Bulgaria, but I have been living for a long time in the Czech Republic. Bulgarian is my main language, but sometimes I contribute to localization projects in Turkish, Romanian, Macedonian and Czech.

Q&A

Q: What inspired you to join the Mozilla localization community?

A: As I mentioned here, I decided to start localizing software because I knew some people had trouble using it in other languages. I believe everyone deserves the right to use software in a language they understand, which helps them get the maximum value out of it. As for Mozilla in particular, I believe in the mission, and this is the most efficient way for me to contribute.

Q: How do you solve challenges like bugs or workflow hiccups, especially when collaborating virtually?

A: Since we are a small team for the Bulgarian localization, we are almost always in sync on how to translate the strings. We follow some basic rules, such as using a common dictionary and instructions on how to localize software in Bulgarian (shared across multiple FOSS projects), set down 15+ years ago and still relevant today. When we have a conflict, I usually count on the team managers to share their wisdom, because they have a bit more knowledge than the rest of us.

Q: Which projects or new product features were you most excited about this year, and why?

A: In the last year I contributed mainly to the Thunderbird project. The items that are most exciting to me are:

  • That we finally decided to remove the word “Junk” and replace it with “Spam”; I think this is self-explanatory 🙂
  • The new Account Hub, which significantly improves the consumer’s experience and onboarding into the beautiful world of free email. Free as in Freedom.
  • I am also excited about all the things in the roadmap to come.

Q: What tips, tools, or habits help you succeed as a localizer?

A: If you look at my Pontoon profile, you will see that for the last 2 months I have contributed every day. I find this habit very useful, because it keeps me focused on my goal of consistently improving the localized experience.

Another thing is that I like to provide a better experience for mobile users. I often test and fix labels in Thunderbird for Android which, even when translated correctly, are too long for a mobile phone UI.

And lastly, I love to engage with the community and ask them for help when we finish a section or a product. Last year we asked the Bulgarian community to help us validate a localization available in the beta version and we got some very helpful feedback.

Something fun

Q: Could you share a few fun or unexpected facts about yourself that people might not know?

  • I ran for the European Parliament in 2009 with the intention to fight for our digital rights.
  • I was in almost every media outlet in the world in 2012, when I bought the data of millions of users for $5! This is the Forbes article.
  • I am a heavy metal fan and you can find me in underground clubs, enjoying bands you have never heard of.
  • Apart from technology I am an artist – I produced and performed my own theater play and shot a movie in Prague.
  • I realized my dream to have an opening talk at FOSDEM. I was opening the Sunday session… but still!

Mozilla ThunderbirdYour Workflow, Supercharged

Extensions make Thunderbird truly yours, moving at your pace and reflecting your priorities. Thunderbird’s flexibility means you can tailor the app to how you actually work. We’ll cover tools for efficiency, consistency, and visibility so every send is faster and better informed. Your future self will thank you.

Clippings

We’ve all been there, retyping the same line for the hundredth time and wondering if there’s a better way. Clippings lets you save text once and reuse it anywhere you compose in Thunderbird. You can organize by folders, apply color labels, and search by name with autocomplete, so the right text is always a couple of keystrokes away.

When you paste a clipping, you can include fill‑in prompts for names, dates, or custom notes, and even keep simple HTML formatting and images when needed. It’s like a spellbook for your inbox–summon, swap, send. 

Below is a quick glance at how Clippings can help you: 

  • Save and paste reusable snippets anywhere you write—no more repeat typing.
  • Include prompts for names, dates, or custom notes; HTML and inline images.
  • Organize with folders and labels; find snippets fast with autocomplete.
  • Paste instantly with keyboard shortcuts; import, export, or sync your library.
Link to Thunderbird Add-on library.




With the content process streamlined, now for a sign‑off that keeps your tone on track.

Signature Switch

We rotate hats as we write: buttoned‑up for clients, warm for teammates, and careful punctuation for legal. Signature Switch helps you with that. Keep multiple signatures, and swap them in with a click or shortcut right from the composer. Turn a signature off entirely, pick from your saved set, or append a different one without retyping a thing.

Use plain text for simplicity, or HTML with images and links for a more professional finish. Because everything is accessible while you write, choosing the right signature doesn’t break your flow—and it helps keep branding and tone consistent across messages. One click and your signature goes from handshake to high‑five.

Below is a quick glance at how Signature Switch can help you: 

  • Switch signatures on/off or choose from your saved set, no retyping.
  • Match by recipient, account, or context; keep tone aligned.
  • Use plain text or polished HTML with images and links.
  • Access quickly from the composer toolbar or menu while you write.
Link to Thunderbird Add-on library.




With the sign‑off sorted, now let’s measure the results.

ThirdStats

Looking for a way to interpret email trends on more than just vibes? ThirdStats turns your mailbox into clear, local analytics that reveal how your email workload actually behaves: when volume spikes, which hours are busiest, how response times trend, and which folders see the most activity. Interactive charts make patterns easy to spot at a glance.

You can compare accounts side by side, adjust date ranges to see changes over time, and focus on a specific folder for deeper context. All processing happens on your device with read‑only access, so your data isn’t transmitted elsewhere. It’s a simple, private way to understand your workload and time your effort better. 

Below is a quick glance at how ThirdStats can help you: 

  • Visualize volume, peak hours, response times, and folder activity with interactive charts.
  • Compare accounts side by side; filter by date ranges; view by folder.
  • Keep it private: analysis runs locally with read‑only access, no external transmission.
Link to Thunderbird Add-on library.




Do you have a favorite extension? Share it with us in the comments below.

To learn more about add-ons check out Maximize Your Day: Extend Your Productivity with Add-ons.

Your workflow deserves a client that adapts to it. Add what accelerates you, trim the rest, and keep improving. When you’re ready to go further, the Thunderbird Add-ons Catalog is the fastest path to new features. Check what’s popular, discover up‑and‑coming tools, and install directly from the page with built‑in version compatibility checks. Thanks for reading.

The post Your Workflow, Supercharged appeared first on The Thunderbird Blog.

The Servo BlogThis month in Servo: experimental mode, Trusted Types, strokeText(), and more!

September was another busy month for Servo, with a bunch of new features landing in our nightly builds:

[Image: servoshell nightly showing new support for the strokeText() method on CanvasRenderingContext2D]

servoshell now has a new experimental mode button (☢). Turning on experimental mode has the same effect as running Servo with --enable-experimental-web-platform-features: it enables all engine features, even those that may not be stable or complete. This works much like Chromium’s option with the same name, and it can be useful when a page is not functioning correctly, since it may allow the page to make further progress.

[Image: servoshell nightly showing the new experimental mode button (☢), which enables experimental web platform features. Top to bottom: experimental mode off, experimental mode on.]

Viewport meta tags are now enabled on mobile devices only, fixing a bug where they were enabled on desktop (@shubhamg13, #39133). You can still enable them if needed with --pref viewport_meta_enabled (@shubhamg13, #39207).

Servo now supports Content-Encoding: zstd (@webbeef, #36530), and we’ve fixed a bug causing spurious credentials prompts when an HTTP 401 response has no ‘WWW-Authenticate’ header (@simonwuelker, #39215). We’ve also made a bunch of progress on AbortController (@TimvdLippe, #39290, #39295, #39374, #39406) and <link rel=preload> (@TimvdLippe, @jdm, #39033, #39034, #39052, #39146, #39167).

‘Content-Security-Policy: sandbox’ now disables scripting unless ‘allow-scripts’ is given (@TimvdLippe, #39163), and crypto.subtle.exportKey() can now export HMAC keys in raw format (@arihant2math, #39059).

The scrollIntoView() method on Element now works with shadow DOM (@mrobinson, @Loirooriol, #39144), and recurses to parent iframes if they are same origin (@Loirooriol, @mrobinson, #39475, #39397, #39153).

Several types of DOM exceptions can now have error messages (@arihant2math, @rodio, @excitablesnowball, #39056, #39394, #39535), and we’ve also fixed a bug where links often need to be clicked twice (@yezhizhen, #39326), and fixed bugs affecting <img> attribute changes (@tharkum, #39483), the ‘:defined’ selector (@mukilan, #39325, #39390), invertSelf() on DOMMatrix (@lumiscosity, #39113), and the ‘href’ setter on Location (@arihant2math, @sagudev, #39051).

One complex part of Servo isn’t even written in Rust: it’s written in Python! codegen.py, which describes how to generate Rust code with bindings for every known DOM interface from the WebIDL, is now fully type-annotated (@jerensl, @mukilan, #39070, #38998).

Embedding and automation

Servo now requires Rust 1.86 to build (@sagudev, #39185).

Keyboard scrolling is now automatically implemented by Servo (@delan, @mrobinson, #39371, #39469), so embedders no longer need to translate arrow keys, Home, End, Page Up, and Page Down to WebView API calls. This change also improves the behaviour of those keys, scrolling the element or <iframe> that was focused or most recently clicked (or a nearby ancestor).

DebugOptions::convert_mouse_to_touch (-Z convert-mouse-to-touch) has been removed (@mrobinson, #39352), with no replacement. Touch event simulation continues to be available in servoshell as --simulate-touch-events.

DebugOptions::webrender_stats (-Z wr-stats in servoshell) has been removed (@mrobinson, #39331); instead call toggle_webrender_debugging(Profiler) on a WebView (or press Ctrl+F12 in servoshell).

DebugOptions::trace_layout (-Z trace-layout) has been removed (@mrobinson, #39332), since it had no effect.

We’ve improved the docs for WebViewDelegate::notify_history_changed (@Narfinger, @mrobinson, @yezhizhen, #39134).

When automating servoshell with WebDriver, commands targeting elements now correctly scroll them into view if needed (@PotatoCP, @yezhizhen, #38508, #39265), allowing Element Click, Element Send Keys, Element Clear, and Take Element Screenshot to work properly when the element is outside the viewport.

WebDriver mouse inputs now work correctly with HiDPI scaling on more platforms (@mrobinson, #39472), and we’ve improved the reliability of Take Screenshot, Take Element Screenshot (@yezhizhen, #39499, #39539, #39543), Switch To Frame (@yezhizhen, #39086), Switch To Window (@yezhizhen, #39241), and New Session (@yezhizhen, #39040).

These improvements have enabled us to run the WebDriver conformance tests in CI by default (@PotatoCP, #39087), and also mean we’re closer than ever to running WebDriver-based Web Platform Tests.

servoshell

Favicons now update correctly when you navigate back and forward (@webbeef, #39575), not just when you load a new page.

servoshell’s command line argument parsing has been reworked (@Narfinger, #37194, #39316), which should fix the confusing behaviour of some options.

On mobile devices, servoshell now resizes the webview correctly when the available space changes (@blueguy1, @yjx, @yezhizhen, #39507).

On macOS, telling servoshell to take a screenshot no longer hides the window (@mrobinson, #39500). This does not affect taking a screenshot in headless mode (--headless), where there continues to be no window at all.

Performance

Servo currently runs in single-process mode unless you opt in to --multiprocess mode, and we’ve landed a few perf improvements in that default mode. For one, in single-process mode, script can now communicate with the embedder directly for reduced latency (@jschwe, #39039). We also create one thread pool for the image cache now, rather than one pool per origin (@rodio, #38783).

Many components of Servo that would be separated by a process boundary in multiprocess mode now use crossbeam channels in single-process mode, rather than using IPC channels in both modes (@jschwe, #39073, #39076, #39345, #39347, #39348, #39074). IPC channels are required when communicating with another process, but they’re more expensive, because they require serialising and deserialising each message, plus resources from the operating system.
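
To see why that matters, here is a rough sketch using the real crossbeam-channel and ipc-channel crates (the message type and setup are made up for illustration): the crossbeam sender simply moves the value within the process, while the IPC sender requires the message to be serialisable and is backed by OS resources even when both endpoints happen to live in the same process.

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct Msg {
    id: u64,
    payload: Vec<u8>,
}

fn main() {
    // In-process channel: the value is moved, nothing is serialised.
    let (tx, rx) = crossbeam_channel::unbounded::<Msg>();
    tx.send(Msg { id: 1, payload: vec![0; 1024] }).unwrap();
    let _fast = rx.recv().unwrap();

    // Cross-process-capable channel: every message is serialised on send
    // and deserialised on receive, and the channel itself is an OS resource.
    let (itx, irx) = ipc_channel::ipc::channel::<Msg>().unwrap();
    itx.send(Msg { id: 2, payload: vec![0; 1024] }).unwrap();
    let _slow = irx.recv().unwrap();
}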

We’ve started working on an optimisation for string handling in Servo’s DOM layer (@Narfinger, #39480, #39481, #39504). Strings in our DOM have historically been represented as ordinary Rust strings, but they often come from SpiderMonkey, where they use a variety of representations, none of which are entirely compatible. SpiderMonkey strings would continue to need conversion to Servo strings, but the idea we’re working towards is to make the conversion lazy, in the hope that many strings will never end up being converted at all.
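
A hand-wavy sketch of that lazy-conversion idea (these are not Servo's actual types, and SpiderMonkey's representations are stubbed here as plain UTF-16 code units): hold on to the engine-side form, and only materialise a Rust String the first time someone actually needs it.

enum LazyDomString {
    // Already materialised as a Rust string.
    Rust(String),
    // Still in the engine's native representation (stubbed as UTF-16).
    Engine(Vec<u16>),
}

impl LazyDomString {
    // Convert on first access and cache the result, so strings that are
    // never read as Rust strings are never converted at all.
    fn as_str(&mut self) -> &str {
        if let LazyDomString::Engine(units) = self {
            let converted = String::from_utf16_lossy(units);
            *self = LazyDomString::Rust(converted);
        }
        match self {
            LazyDomString::Rust(s) => s.as_str(),
            LazyDomString::Engine(_) => unreachable!("converted above"),
        }
    }
}

fn main() {
    let mut s = LazyDomString::Engine("hello".encode_utf16().collect());
    println!("{}", s.as_str()); // conversion happens here
    println!("{}", s.as_str()); // cached: no further work
}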

We now use a faster hash algorithm for internal hashmaps that are not security-critical (@Narfinger, #39106, #39166, #39202, #39233, #39244, #39168). These changes also switch that faster algorithm from FNV to an even simpler polynomial hash, following in the footsteps of Rust and Stylo.
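
For a feel of the difference, here is a toy comparison (not the exact constants, state size, or finalisation that rustc-hash or Servo use): FNV-1a mixes each byte in with an XOR followed by a multiply, while a polynomial hash is just Horner's-rule evaluation of the byte string at some odd constant, one multiply and one add per byte.

// FNV-1a over bytes, with the standard 64-bit offset basis and prime.
fn fnv1a(bytes: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in bytes {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

// Polynomial hash: h = ((b0 * K + b1) * K + b2) * K + ...
fn poly_hash(bytes: &[u8]) -> u64 {
    const K: u64 = 0x9e3779b97f4a7c15; // an arbitrary odd constant (assumption)
    let mut h: u64 = 0;
    for &b in bytes {
        h = h.wrapping_mul(K).wrapping_add(b as u64);
    }
    h
}

fn main() {
    println!("fnv1a: {:#018x}", fnv1a(b"servo"));
    println!("poly:  {:#018x}", poly_hash(b"servo"));
}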

We’ve also landed a few more self-contained perf improvements:

Donations

Thanks again for your generous support! We are now receiving 5654 USD/month (+1.8% over August) in recurring donations.

This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo. Keep an eye out for further CI improvements in the coming months, including faster pull request checks and ten-minute WPT builds.

Servo is also on thanks.dev, and already 28 GitHub users (±13 from August) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.

Conference talks

MiniApps Design and Servo (starting at ~2:37:00; slides) — Gregory Terzian (@gterzian) spoke about how Servo can be used as a web engine for mini-app platforms at WebEvolve 2025

独⽴的,轻量级,模块化与并⾏处理架构的Web引擎开发 [zh] / Developing an independent, light-weight, modular and parallel web-engine [en] (starting at ~5:49:00; slides) — Jonathan Schwender (@jschwe) spoke about Servo’s goals and status at WebEvolve 2025

Servo: A new web engine written in Rust* (slides; transcript) — Manuel Rego (@rego) spoke about the Servo project at GOSIM Hangzhou 2025

Driving Innovation with Servo and OpenHarmony: Unified Rendering and WebDriver* (slides) — Jingshi Shangguan & Zhizhen Ye (@yezhizhen) spoke about a new OpenHarmony rendering backend and WebDriver support in Servo at GOSIM Hangzhou 2025

The Joy and Value of Embedded Servo Systems* (slides) — Gregory Terzian (@gterzian) spoke about embedding Servo at GOSIM Hangzhou 2025

A Dive Into the Servo Layout System* (slides) — Martin Robinson (@mrobinson) & Oriol Brufau (@obrufau) spoke about the architecture of Servo’s parallel and incremental layout system at GOSIM Hangzhou 2025

* video coming soon; go to our About page for updates

Mozilla Addons BlogAnnouncing data collection consent changes for new Firefox extensions

As of November 3rd 2025, all new Firefox extensions will be required to specify if they collect or transmit personal data in their manifest.json file using the browser_specific_settings.gecko.data_collection_permissions key. This will apply to new extensions only, and not new versions of existing extensions. Extensions that do not collect or transmit any personal data are required to specify this by setting the "none" required data collection permission in this property.
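
For a new extension that collects no personal data at all, the manifest addition might look roughly like this (a sketch based on the key named above; the extension id is a placeholder, and the WebExtensions documentation has the full schema):

{
  "manifest_version": 3,
  "name": "My Extension",
  "version": "1.0",
  "browser_specific_settings": {
    "gecko": {
      "id": "my-extension@example.com",
      "data_collection_permissions": {
        "required": ["none"]
      }
    }
  }
}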

This information will then be displayed to the user when they start to install the extension, alongside any permissions it requests.

[Image: Screenshot of example Firefox extension installation prompt showing data that the extension collects]
[Image: Screenshot of example Firefox extension installation prompt showing that the extension claims it collects no data]

This information will also be displayed on the addons.mozilla.org page, if it is publicly listed, and in the Permissions and Data section of the Firefox about:addons page for that extension. If an extension supports versions of Firefox prior to 140 for Desktop, or 142 for Android, then the developer will need to continue to provide the user with a clear way to control the add-on’s data collection and transmission immediately after installation of the add-on.

Once any extension starts using these data_collection_permissions keys in a new version, it will need to continue using them for all subsequent versions. Extensions that do not have this property set correctly, and are required to use it, will be prevented from being submitted to addons.mozilla.org for signing with a message explaining why.

In the first half of 2026, Mozilla will require all extensions to adopt this framework. But don’t worry, we’ll give plenty of notice via the add-ons blog. We’re also developing some new features to ease this transition for both extension developers and users, which we will announce here.

The post Announcing data collection consent changes for new Firefox extensions appeared first on Mozilla Add-ons Community Blog.

Niko MatsakisExplicit capture clauses

In my previous post about Ergonomic Ref Counting, I talked about how, whatever else we do, we need a way to have explicit handle creation that is ergonomic. The next few posts are going to explore a few options for how we might do that.

This post focuses on explicit capture clauses, which would permit closures to be annotated with an explicit set of captured places. My take is that explicit capture clauses are a no-brainer, for reasons that I’ll cover below, and we should definitely do them; but they may not be enough to be considered ergonomic, so I’ll explore more proposals afterwards.

Motivation

Rust closures today work quite well but I see a few problems:

  • Teaching and understanding closure desugaring is difficult because it lacks an explicit form. Users have to learn to desugar in their heads to understand what’s going on.
  • Capturing the “clone” of a value (or possibly other transformations) has no concise syntax.
  • For long closure bodies, it is hard to determine precisely which values are captured and how; you have to search the closure body for references to external variables, account for shadowing, etc.
  • It is hard to develop an intuition for when move is required. I find myself adding it when the compiler tells me to, but that’s annoying.

Let’s look at a strawperson proposal

Some time ago, I wrote a proposal for explicit capture clauses. I actually see a lot of flaws with this proposal, but I’m still going to explain it: right now it’s the only solid proposal I know of, and it’s good enough to explain how an explicit capture clause could be seen as a solution to the “explicit and ergonomic” goal. I’ll then cover some of the things I like about the proposal and what I don’t.

Begin with move

The proposal begins by extending the move keyword with a list of places to capture:

let closure = move(a.b.c, x.y) || {
    do_something(a.b.c.d, x.y)
};

The closure will then take ownership of those two places; references to those places in the closure body will be replaced by accesses to these captured fields. So that example would desugar to something like

let closure = {
    struct MyClosure {
        a_b_c: Foo,
        x_y: Bar,
    }

    impl FnOnce<()> for MyClosure {
        fn call_once(self) -> Baz {
            do_something(self.a_b_c.d, self.x_y)
            //           ----------    --------
            //   The place `a.b.c` is      |
            //   rewritten to the field    |
            //   `self.a_b_c`              |
            //                  Same here but for `x.y`
        }
    }

    MyClosure {
        a_b_c: a.b.c,
        x_y: x.y,
    }
};

When using a simple list like this, attempts to reference other places that were not captured result in an error:

let closure = move(a.b.c, x.y) || {
    do_something(a.b.c.d, x.z)
    //           -------  ---
    //           OK       Error: `x.z` not captured
};

Capturing with rewrites

It is also possible to capture a custom expression by using an = sign. So for example, you could rewrite the above closure as follows:

let closure = move(
    a.b.c = a.b.c.clone(),
    x.y,
) || {
    do_something(a.b.c.d, x.y)
};

and it would desugar to:

let closure = {
    struct MyClosure { /* as before */ }
    impl FnOnce<()> for MyClosure { /* as before */ }

    MyClosure {
        a_b_c: a.b.c.clone(),
        //     --------------
        x_y: x.y,
    }
};

When using this form, the expression assigned to a.b.c must have the same type as a.b.c in the surrounding scope. So this would be an error:

let closure = move(
    a.b.c = 22, // Error: `i32` is not `Foo`
    x.y,
) || {
    /* ... */
};

Shorthands and capturing by reference

You can understand move(a.b) as sugar for move(a.b = a.b). We support other convenient shorthands too, such as

move(a.b.clone()) || {...}
// == anything that ends in a method call becomes ==>
move(a.b = a.b.clone()) || {...}

and two kinda special shorthands:

move(&a.b) || { ... }
move(&mut a.b) || { ... }

These are special because the captured value is indeed &a.b and &mut a.b – but that by itself wouldn’t work, because the type doesn’t match. So we rewrite each access to a.b to desugar to a dereference of the a_b field, like *self.a_b:

move(&a.b) || { foo(a.b) }

// desugars to

struct MyStruct<'l> {
    a_b: &'l Foo
}

impl FnOnce for MyStruct<'_> {
    fn call_once(self) {
        foo(*self.a_b)
        //  ---------
        //  we insert the `*` too
    }
}

MyStruct {
    a_b: &a.b,
}


There’s a lot of precedent for this sort of transform: it’s precisely what we do for the Deref trait and for existing closure captures.

Fresh variables

We should also allow you to define fresh variables. These can have arbitrary types. The values are evaluated at closure creation time and stored in the closure metadata:

move(
    data = load_data(),
    y,
) || {
    take(&data, y)
}

Open-ended captures

All of our examples so far fully enumerated the captured variables. But Rust closures today infer the set of captures (and the style of capture) based on the paths that are used. We should permit that as well. I’d permit that with a .. sugar, so these two closures are equivalent:

let c2 = move || /* closure */;
//       ---- capture anything that is used,
//            taking ownership

let c1 = move(..) || /* closure */;
//           ---- capture anything else that is used,
//                taking ownership

Of course you can combine:

let c = move(x.y.clone(), ..) || {

};

And you could write ref to get the equivalent of || closures:

let c2 = || /* closure */;
//       -- capture anything that is used,
//          using references if possible
let c1 = move(ref) || /* closure */;
//            --- capture anything else that is used,
//                using references if possible

This lets you write things like:

let c = move(
    a.b.clone(), 
    c,
    ref
) || {
    combine(&a.b, &c, &z)
    //       ---   -   -
    //        |    |   |
    //        |    | This will be captured by reference
    //        |    | since it is used by reference
    //        |    | and is not explicitly named.
    //        |    |
    //        |   This will be captured by value
    //        |   since it is explicitly named.
    //        |
    // We will capture a clone of this because
    // the user wrote `a.b.clone()`
}

Frequently asked questions

How does this help with our motivation?

Let’s look at the motivations I named:

Teaching and understanding closure desugaring is difficult

There’s a lot of syntax there, but it also gives you an explicit form that you can use to do explanations. To see what I mean, consider the difference between these two closures (playground).

The first closure uses ||:

fn main() {
    let mut i = 3;
    let mut c_attached = || {
        let j = i + 1;
        std::mem::replace(&mut i, j)
    };
    ...
}

While the second closure uses move:

fn main() {
    let mut i = 3;
    let mut c_detached = move || {
        let j = i + 1;
        std::mem::replace(&mut i, j)
    };
    ...
}

These are in fact pretty different, as you can see in this playground. But why? Well, the first closure desugars to capture a reference:

let mut i = 3;
let mut c_attached = move(&i) || {...};

and the second captures by value:

let mut i = 3;
let mut c_detached = move(i) || {...};

Before, to explain that, I had to resort to desugaring to structs.

Capturing a clone is painful

If you have a closure that wants to capture the clone of something today, you have to introduce a fresh variable. So something like this:

let closure = move || {
    begin_actor(data, self.tx.clone())
};

becomes

let closure = {
    let self_tx = self.tx.clone();
    move || {
        begin_actor(data, self_tx.clone())
    }
};

This is awkward. Under this proposal, it’s possible to point-wise replace specific items:

let closure = move(self.tx.clone(), ..) || {
    begin_actor(data, self.tx.clone())
};
For long closure bodies, it is hard to determine precisely which values are captured and how

Quick! What variables does this closure use from the environment?

.flat_map(move |(severity, lints)| {
    parse_tt_as_comma_sep_paths(lints, edition)
    .into_iter()
    .flat_map(move |lints| {
        // Rejoin the idents with `::`, so we have no spaces in between.
        lints.into_iter().map(move |lint| {
            (
                lint.segments().filter_map(
                    |segment| segment.name_ref()
                ).join("::").into(),
                severity,
            )
        })
    })
})

No idea? Me either. What about this one?

.flat_map(move(edition) |(severity, lints)| {
    /* same as above */
})

Ah, pretty clear! I find that once a closure moves beyond a couple of lines, it can make a function kind of hard to read, because it’s hard to tell what variables it may be accessing. I’ve had functions where it’s important to correctness for one reason or another that a particular closure only accesses a subset of the values around it, but I have no way to indicate that right now. Sometimes I make separate functions, but it’d be nicer if I could annotate the closure’s captures explicitly.

It is hard to develop an intuition for when move is required

Hmm, actually, I don’t think this notation helps with that at all! More about this below.

Let me cover some of the questions you may have about this design.

Why allow the “capture clause” to specify an entire place, like a.b.c?

Today you can write closures that capture places, like self.context below:

let closure = move || {
    send_data(self.context, self.other_field)
};

My goal was to be able to take such a closure and to add annotations that change how particular places are captured, without having to do deep rewrites in the body:

let closure = move(self.context.clone(), ..) || {
    //            --------------------------
    //            the only change
    send_data(self.context, self.other_field)
};

This definitely adds some complexity, because it means we have to be able to “remap” a place like a.b.c that has multiple parts. But it makes the explicit capture syntax far more powerful and convenient.

Why do you keep the type the same for places like a.b.c?

I want to ensure that the type of a.b.c is the same wherever it is type-checked; this simplifies the compiler somewhat and generally makes it easier to move code into and out of a closure.

Why the move keyword?

Because it’s there? To be honest, I don’t like the choice of move because it’s so operational. I think if I could go back, I would try to refashion our closures around two concepts:

  • Attached closures (what we now call ||) would always be tied to the enclosing stack frame. They’d always have a lifetime even if they don’t capture anything.
  • Detached closures (what we now call move ||) would capture by-value, like move today.

I think this would help to build up the intuition of “use detach || if you are going to return the closure from the current stack frame and use || otherwise”.

What would a max-min explicit capture proposal look like?

A maximally minimal explicit capture clause proposal would probably just let you name specific variables and not “subplaces”:

move(
    a_b_c = a.b.c,
    x_y = &x.y
) || {
    *x_y + a_b_c
}

I think you can see though that this makes introducing an explicit form a lot less pleasant to use and hence isn’t really going to do anything to support ergonomic RC.

Conclusion: Explicit closure clauses make things better, but not great

I think doing explicit capture clauses is a good idea – I generally think we should have explicit syntax for everything in Rust, for teaching and explanatory purposes if nothing else; I didn’t always think this way, but it’s something I’ve come to appreciate over time.

I’m not sold on this specific proposal – but I think working through it is useful, because it (a) gives you an idea of what the benefits would be and (b) gives you an idea of how much hidden complexity there is.

I think the proposal shows that adding explicit capture clauses goes some way towards making things explicit and ergonomic. Writing move(a.b.c.clone()) is definitely better than having to create a new binding.

But for me, it’s not really nice enough. It’s still quite a mental distraction to have to find the start of the closure, insert the a.b.c.clone() call, and it makes the closure header very long and unwieldy. Particularly for short closures the overhead is very high.

This is why I’d like to look into other options. Nonetheless, it’s useful to have discussed a proposal for an explicit form: if nothing else, it’ll be useful to explain the precise semantics of other proposals later on.

Niko MatsakisMove, Destruct, Forget, and Rust

This post presents a proposal to extend Rust to support a number of different kinds of destructors. This means we could support async drop, but also prevent “forgetting” (leaking) values, enabling async scoped tasks that run in parallel à la rayon/libstd. We’d also be able to have types whose “destructors” require arguments. This proposal – an evolution of “must move” that I’ll call “controlled destruction” – is, I think, needed for Rust to live up to its goal of giving safe versions of critical patterns in systems programming. As such, it is needed to complete the “async dream”, in which async Rust and sync Rust work roughly the same.

Nothing this good comes for free. The big catch of the proposal is that it introduces more “core splits” into Rust’s types. I believe these splits are well motivated and reasonable – they reflect inherent complexity, in other words, but they are something we’ll want to think carefully about nonetheless.

Summary

The TL;DR of the proposal is that we should:

  • Introduce a new “default trait bound” Forget and an associated trait hierarchy:
    • trait Forget: Destruct, representing values that can be forgotten
    • trait Destruct: Move, representing values with a destructor
    • trait Move: Pointee, representing values that can be moved
    • trait Pointee, the base trait that represents any value
  • Use the “opt-in to weaker defaults” scheme proposed for sizedness by RFC #3729 (Hierarchy of Sized Traits)
    • So fn foo<T>(t: T) defaults to “a T that can be forgotten/destructed/moved”
    • And fn foo<T: Destruct>(t: T) means “a T that can be destructed, but not necessarily forgotten”
    • And fn foo<T: Move>(t: T) means “a T that can be moved, but not necessarily destructed or forgotten”
    • …and so forth.
  • Integrate and enforce the new traits:
    • The bound on std::mem::forget will already require Forget, so that’s good.
    • Borrow check can enforce that any dropped value must implement Destruct; in fact, we already do this to enforce const Destruct bounds in const fn.
    • Borrow check can be extended to require a Move bound on any moved value.
  • Adjust the trait bound on closures (luckily this works out fairly nicely)

Motivation

In a talk I gave some years back at Rust LATAM in Uruguay[1], I said this:

  • It’s easy to expose a high-performance API.
  • But it’s hard to help users control it – and this is what Rust’s type system does.
[Image: Person casting a firespell and burning themselves]

Rust currently does a pretty good job of preventing parts of your program from interfering with one another, but we don’t do as good a job when it comes to guaranteeing that cleanup happens[2]. We have destructors, of course, but they have two critical limitations:

  • All destructors must meet the same signature, fn drop(&mut self), which isn’t always adequate.
  • There is no way to guarantee a destructor once you give up ownership of a value.

Making it concrete.

That motivation was fairly abstract, so let me give some concrete examples of things that tie back to this limitation:

  • The ability to have async or const drop, both of which require a distinct drop signature.
  • The ability to have a “drop” operation that takes arguments, such as a message that must be sent, or a result code that must be provided before the program terminates.
  • The ability to have async scopes that can access the stack, which requires a way to guarantee that a parallel thread will be joined even in an async context.
  • The ability to integrate at maximum efficiency with WebAssembly async tasks, which require guaranteed cleanup.[3]

The goal of this post is to outline an approach that could solve all of the above problems and which is backwards compatible with Rust today.

The “capabilities” of value disposal

The core problem is that Rust today assumes that every Sized value can be moved, dropped, and forgotten:

// Without knowing anything about `T` apart
// from the fact that it's `Sized`, we can...
fn demonstration<T>(a: T, b: T, c: T) {
    // ...drop `a`, running its destructor immediately.
    std::mem::drop(a);

    // ...forget `b`, skipping its destructor
    std::mem::forget(b);

    // ...move `c` into `x`
    let x = c;
} // ...and then have `x` get dropped automatically,
// as we exit the block.

Destructors are like “opt-out methods”

The way I see it, most methods are “opt-in” – they don’t execute unless you call them. But destructors are different. They are effectively a method that runs by default – unless you opt out, e.g., by calling forget. But the ability to opt out means that they don’t fundamentally add any power over regular methods; they just make for a more ergonomic API.

The implication is that the only way in Rust today to guarantee that a destructor will run is to retain ownership of the value. This can be important to unsafe code – APIs that permit scoped threads, for example, need to guarantee that those parallel threads will be joined before the function returns. The only way they have to do that is to use a closure which gives &-borrowed access to a scope:

scope(|s| ...)
//     -  --- ...which ensures that this
//     |      fn body cannot "forget" it.
//     |  
// This value has type `&Scope`... 

Because the API never gives up ownership of the scope, it can ensure that the scope is never “forgotten” and thus that its destructor runs.
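
This pattern is exactly what the stable std::thread::scope API provides today. A minimal example: the spawned threads may borrow from the enclosing stack frame precisely because scope guarantees they are joined before it returns.

fn main() {
    let data = vec![1, 2, 3];
    std::thread::scope(|s| {
        s.spawn(|| println!("borrowed: {:?}", data));
        s.spawn(|| println!("len: {}", data.len()));
    }); // both threads are guaranteed to have been joined here
    println!("data is still usable: {:?}", data);
}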

The scoped thread approach works for sync code, but it doesn’t work for async code. The problem is that async functions return a future, which is a value. Users can therefore decide to “forget” this value, just like any other value, and thus the destructor may never run.

Guaranteed cleanup is common in systems programming

When you start poking around, you find that guaranteed destructors turn up quite a bit in systems programming. Scoped APIs in futures are one example, but DMA (direct memory access) is another. Many embedded devices have a mode where you begin a DMA transfer that causes data to be written into memory asynchronously. But you need to ensure that this DMA is terminated before that memory is freed. If that memory is on your stack, that means you need a destructor that will either cancel or block until the DMA finishes.[4]

So what can we do about it?

This situation is very analogous to the challenge of revisiting the default Sized bound, and I think the same basic approach that I outlined in my earlier post on the hierarchy of Sized traits will work.

The core of the idea is simple: have a “special” set of traits arranged in a hierarchy:

trait Forget: Destruct {} // Can be "forgotten"
trait Destruct: Move {}   // Can be "destructed" (dropped)
trait Move: Pointee {}    // Can be "moved"
trait Pointee {}          // Can be referenced by pointer

By default, generic parameters get a Forget bound, so fn foo<T>() is equivalent to fn foo<T: Forget>(). But if the parameter opts in to a weaker bound, then the default is suppressed, so fn bar<T: Destruct>() means that T is assumed to be “destructible” but not forgettable. And fn baz<T: Move>() indicates that T can only be moved.

Impact of these bounds

Let me explain briefly how these bounds would work.

The default can forget, drop, move etc

Given a default type T, or one that writes Forget explicitly, the function can do anything that is possible today:

fn just_forget<T: Forget>(a: T, b: T, c: T) {
    //         --------- this bound is the default
    std::mem::drop(a);   // OK
    std::mem::forget(b); // OK
    let x = c;           // OK
}

The forget function requires T: Forget

The std::mem::forget function would require T: Forget as well:

pub fn forget<T: Forget>(value: T) { /* magic intrinsic */ }

This means that if you have only Destruct, the function can only drop or move; it can’t “forget”:

fn just_destruct<T: Destruct>(a: T, b: T, c: T) {
    //           -----------
    // This function only requests "Destruct" capability.

    std::mem::drop(a);   // OK
    std::mem::forget(b); // ERROR: `T: Forget` required
    let x = c;           // OK
}

The borrow checker would require “dropped” values implement Destruct

We would modify the drop function to require only T: Destruct:

fn drop<T: Destruct>(t: T) {}

We would also extend the borrow checker so that when it sees a value being dropped (i.e., because it went out of scope), it would require the Destruct bound.

That means that if you have a value whose type is only Move, you cannot “drop” it:

fn just_move<T: Move>(a: T, b: T, c: T) {
    //           -----------
    // This function only requests "Move" capability.

    std::mem::drop(a);   // ERROR: `T: Destruct` required
    std::mem::forget(b); // ERROR: `T: Forget` required
    let x = c;           // OK
}                        // ERROR: `x` is being dropped, but `T: Destruct` does not hold

This means that if you have only a Move bound, you must move anything you own if you want to return from the function. For example:

fn return_ok<T: Move>(a: T) -> T {
    a // OK
}

If you have a function that does not move, you’ll get an error:

fn return_err<T: Move>(a: T) {
} // ERROR: `a` does not implement `Destruct`

It’s worth pointing out that this will be annoying as all get out in the face of panics:

fn return_err<T: Move>(a: T) -> T {
    // ERROR: If a panic occurs, `a` would be dropped, but `T` does not implement `Destruct`
    forbid_env_var();

    a
} 

fn forbid_env_var() {
    if std::env::var("BAD").is_ok() {
        panic!("Uh oh: BAD cannot be set");
    }
}

I’m ok with this, but it is going to put pressure on better ways to rule out panics statically.

Const (and later async) variants of Destruct

In fact, we are already doing something much like this destruct check for const functions. Right now if you have a const fn and you try to drop a value, you get an error:

const fn test<T>(t: T) {
} // ERROR!

Compiling that gives you the error:

error[E0493]: destructor of `T` cannot be evaluated at compile-time
 --> src/lib.rs:1:18
  |
1 | const fn test<T>(t: T) { }
  |                  ^       - value is dropped here
  |                  |
  |                  the destructor for this type cannot be evaluated in constant functions

This check is not presently taking place in borrow check but it could be.

The borrow checker would require “moved” values implement Move

The final part of the check would be requiring that “moved” values implement Move:

fn return_err<T: Pointee>(a: T) -> T {
    a // ERROR: `a` does not implement `Move`
}

You might think that having types that are !Move would replace the need for pin, but this is not the case. A pinned value is one that can never move again, whereas a value that is not Move can never be moved in the first place – at least once it is stored into a place.

I’m not sure if this part of the proposal makes sense; we could start by just having all types be Move, Destruct, or (the default) Forget.

Opting out from forget etc

The other part of the proposal is that you should be able to explicitly “opt out” from being forgettable, e.g. by doing

struct MyType {}
impl Destruct for MyType {}

Doing this will limit the generics that can accept your type, of course.

Associated type bounds

The tough part with these “default bound” proposals is always associated type bounds. For backwards compatibility, we’d have to default to Forget but a lot of associated types that exist in the wild today shouldn’t really require Forget. For example a trait like Add should really just require Move for its return type:

trait Add<Rhs = Self> {
    type Output /* : Move */;
}

I am basically not too worried about this. It’s possible that we can weaken these bounds over time or through editions. Or, perhaps, add in some kind of edition-specific “alias” like

trait Add2025<Rhs = Self> {
    type Output: Move;
}

where Add2025 is implemented for everything that implements Add.

I am not sure exactly how to manage it, but we’ll figure it out – and in the meantime, most of the types that should not be forgettable are really just “guard” types that don’t have to flow through quite so many places.

Associated type bounds in closures

The one place where I think it is really important that we weaken the associated type bounds is with closures – and, fortunately, that’s a place where we can get away with it due to the way our “closure trait bound” syntax works. I feel like I wrote a post on this before, but I can’t find it now; the short version is that, today, when you write F: Fn(), that means that the closure must return (). If you write F: Fn() -> T, then this type T must have been declared somewhere else, and so T will (independently from the associated type of the Fn trait) get a default Forget bound. Since the Fn associated type is not independently nameable in stable Rust, we can change its bounds, and code like this would continue to work unchanged:

fn foo<T, F>()
where
    F: Fn() -> T,
    //         - `T: Forget` still holds by default
{}

Frequently asked questions

How does this relate to the recent thread on internals?

Recently I was pointed at this internals thread for a “substructural type system”, which likely has very similar capabilities. To be totally honest, though, I haven’t had time to read and digest it yet! I had this blog post like 95% done, so I figured I’d post it first and then go try and compare.

What would it mean for a struct to opt out of Move (e.g., by being only Pointee)?

So, the system as I described it would allow for ‘unmoveable’ types (i.e., a struct that opts out from everything and only permits Pointee), but such a struct would only really be something you could store in a static memory location. You couldn’t put it on the stack because the stack must eventually get popped. And you couldn’t move it from place to place because, well, it’s immobile.

This seems like something that could be useful – e.g., to model “video RAM” or something that lives in a specific location in memory and cannot live anywhere else – but it’s not a widespread need.

How would you handle destructors with arguments?

I imagine something like this:

struct Transaction {
    data: Vec<u8>
}

/// Opt out from destruct
impl Move for Transaction { }

impl Transaction {
    // This is effectively a "destructor"
    pub fn complete(
        self, 
        connection: Connection,
    ) {
        let Transaction { data } = self;
    }
}

With this setup, any function that owns a Transaction must eventually invoke transaction.complete(). This is because no values of this type can be dropped, so they must be moved.
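
Continuing that sketch (Connection is whatever hypothetical type the transaction needs), the checks would play out roughly like this under the proposal:

// OK: `tx` is moved into `complete`, satisfying the `Move`-only type.
fn process(tx: Transaction, conn: Connection) {
    // ... do some work with `tx` ...
    tx.complete(conn);
}

// ERROR (under this proposal): `tx` would be dropped at the end of the
// function, but `Transaction` does not implement `Destruct`.
fn oops(tx: Transaction) {
}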

How does this relate to async drop?

This setup attacks a key problem that has blocked async drop in my mind, which is that types that are “async drop” do not have to implement “sync drop”. This gives the type system the ability to prevent them from being dropped in sync code, and it would mean that they can only be dropped via async drop. But there’s still lots of design work to be done there.

Why is the trait Destruct and not Drop?

This comes from the const generics work. I don’t love it. But there is a logic to it. Right now, when you drop a struct or other value, that actually does a whole sequence of things, only one of which is running any Drop impl – it also (for example) drops all the fields in the struct recursively, etc. The idea is that “destruct” refers to this whole sequence.

How hard would this to be to prototype?

I…don’t actually think it would be very hard. I’ve thought somewhat about it and all of the changes seem pretty straightforward. I would be keen to support a lang-team experiment on this.

Does this mean we should have had leak?

The whole topic of destructors and leaks and so forth dates back to approximately Rust 1.0, when we discovered that, in fact, our abstraction for threads was unsound when combined with cyclic ref-counted boxes. Before that we hadn’t fully internalized that destructors are “opt-out methods”. You can read this blog post I wrote at the time. At the time, the primary idea was to have some kind of ?Leak bounds and it was tied to the idea of references (so that all 'static data was assumed to be “leakable”, and hence something you could put into an Rc). I… mostly think we made the right call at the time. I think it’s good that most of the ecosystem is interoperable and that Rc doesn’t require 'static bounds, and certainly I think it’s good that we moved to 1.0 with minimal disruption. In any case, though, I rather prefer this design to the ones that were under discussion at the time, in part because it also addresses the need for different kinds of destructors and for destructors with many arguments and so forth, which wasn’t something we thought about then.

Isn’t it confusing to have these “magic” traits that “opt out” from default bounds?

I think that specifying the bounds you want is inherently better than today’s ? design, both because it’s easier to understand and because it allows us to backwards compatibly add traits in between in ways that are not possible with the ? design.

However, I do see that having T: Move mean that T: Destruct does not hold is subtle. I wonder if we should adopt some kind of sigil or convention on these traits, like T: @Move or something. I don’t know! Something to consider.
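
To see the contrast, here is today’s opt-out style next to the sigil idea (the @ syntax is purely hypothetical, as floated above):

// Today: `?` opts *out* of a default bound.
fn by_ref<T: ?Sized>(t: &T) {
    let _ = t;
}

// Hypothetical: `@` would mark traits whose mention *replaces* the
// defaults, so `T: @Move` reads as "Move, but not necessarily Destruct".
// fn by_value<T: @Move>(t: T) { /* ... */ }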


  1. That was a great conference. Also, interestingly, this is one of my favorite of all my talks, but for some reason, I rarely reuse this material. I should change that. ↩︎

  2. Academics distinguish “safety” from “liveness” properties, where “safety” means “bad things don’t happen” and “liveness” means “good things eventually happen”. Another way of saying this is that Rust’s type system helps with a lot of safety properties but struggles with liveness properties. ↩︎

  3. Uh, citation needed. I know this is true but I can’t find the relevant WebAssembly issue where it is discussed. Help, internet! ↩︎

  4. Really the DMA problem is the same as scoped threads. If you think about it, the embedded device writing to memory is basically the same as a parallel thread writing to memory. ↩︎

Mozilla Addons BlogDeveloper Spotlight: Fox Recap

The Fox Recap team (pictured left to right): Taimur Hasan, Mozilla community manager Matt Cool, Kate Sawtell, Diego Valdez (not pictured: Peter Mitchell).

“What if we did a Spotify Wrapped for your browser?” wondered a group of Cal State Monterey Bay computer science students. That was the initial spark of an idea that became Fox Recap — a Firefox extension that leverages machine learning to give Firefox users fascinating insights into their browsing habits, like peak usage hours, types of websites commonly visited (news, entertainment, shopping, etc.), navigation patterns, and more.

Taimur Hasan was one of four CSMB students who built Fox Recap as part of a Mozilla-supported Capstone project. We spoke with Taimur about his experience building an AI-centered extension from scratch.

What makes Fox Recap an “AI” project?

Taimur Hasan: Fox Recap uses machine learning behind the scenes to classify sites and generate higher-level insights, like top/trending categories and transition patterns. I kept the “AI” messaging light on the listing page to avoid hype and focus on the experience. Ideally the AI features feel seamless and natural rather than front and center.

What was your most challenging development hurdle?  

TH: For me, the most challenging part of development was creating the inference pipeline, which means the part where you actually use the AI model to do something useful. It took careful optimization to run well on a typical laptop as load times were a priority.

What is your perception of young emergent developers like yourself and their regard for privacy on the web?

TH: With data collection on the rise, privacy and security matter more than ever. Among dedicated and enthusiastic young developers, privacy will always be in mind.

How do you see AI and browser extensions interrelating in the coming years? Do you have a sense of mutual direction?

TH: I expect wider use of small, task specific models that quietly improve the user experience in most browser extensions. For mutual direction in the browser and add-on space I can see the use of AI in manipulating the DOM being done pretty heavily in the future.

Any advice for other extension developers curious about AI integration?  

TH: Be clear about the use case and model choice before investing in training or fine tuning. Start simple, validate the value, then add complexity only if it clearly improves the experience.

To learn even more about Fox Recap’s development process, please see Fox Recap: A student-built tool that analyzes your browsing habits.

The post Developer Spotlight: Fox Recap appeared first on Mozilla Add-ons Community Blog.

Mozilla Privacy BlogBehind the Manifesto: Standing up for encryption to keep the internet safe

Welcome to the first blog of the series “Behind the Manifesto,” where we unpack core issues that are critical to Mozilla’s mission. The Mozilla Manifesto represents Mozilla’s commitment to advancing an open, global internet. This blog series digs deeper on our vision for the web and the people who use it, and how these goals are advanced in policymaking and technology. 

At Mozilla, we’ve long said the internet is one of the world’s most important public resources, something that only thrives when guided by core principles. One of those principles is that individual security and privacy online are fundamental.

Encryption is the technology that makes secure and private online interactions possible. It protects our messages, our data, and our privacy, sitting in the center of security and trust on the internet. Given its critical role in online privacy, it can be a focal point for policymakers.

The truth is, encryption is integral to digital trust and safety. Strong encryption keeps us safe while weak encryption puts our personal, financial, and health data at risk. 

In recent years, we’ve seen governments around the world test ways to undermine encryption to access private conversations and data, often framing it as critical to combating crime. From proposals in the EU that could allow law enforcement to read messages before they are encrypted, to the UK Government’s pushback on Apple’s rollout of iCloud end-to-end encryption, or U.S. legislation that would require platforms to provide access to encrypted data, the pressure to weaken encryption is growing globally.

Governments and law enforcement agencies face complex and legitimate challenges in protecting the public from serious crime and emerging online threats. Their work is critical to ensuring safety in an increasingly digital world. But weakening encryption is not the solution. Strong encryption is what keeps everyone safe — it protects citizens, officials, and infrastructure alike. It is the foundation that safeguards people from intrusive surveillance and shields their most sensitive data from those who would exploit it for harm. We must work together to find solutions that both uphold public safety and prevent the erosion of the privacy and security that strong encryption provides.

With encryption increasingly under threat, this year’s Global Encryption Day (October 21) is the perfect moment to reflect on why strong encryption matters for every internet user.

At Mozilla, we believe encryption isn’t a luxury or privilege. It is a necessity for protecting data against unauthorized access. Our commitment to end-to-end encryption is strong because it is essential to protecting people and ensuring the internet remains open and secure.

That’s why Mozilla has taken action for years to protect and advance encryption. In 2023, we joined the Global Encryption Coalition Steering Committee, working with partners around the world to promote encryption and push back on proposals for backdoor access.

In the U.S., we’ve advanced encryption in our 2025 U.S. policy priorities, joined amicus briefs, and raised concerns with bills like the U.S. EARN IT Act. In the EU, we ran a multi-year campaign on the eIDAS Regulation, working alongside civil society, academics, and industry experts to address concerns that Article 45 threatened to undermine the encryption and authentication technologies used on the web. With such a massive risk to web security, Mozilla, with allies, took action, releasing detailed position papers and joint statements. All of our efforts have been to safeguard encryption, privacy, and digital rights. Why? Because the bottom line is simple: backdoor policies diminish the trust that allows the web to be a safe and reliable public resource.

Mozilla’s strong commitment to protecting privacy isn’t just a policy priority; it’s the foundation of our products and initiatives. Below, we’d like to share some of the ways in which Mozilla partnered with allies to make encryption a reality and a core function of the open internet ecosystem.

  • Mozilla is among the co-founders of Let’s Encrypt, a nonprofit Certificate Authority run by the Internet Security Research Group (ISRG), alongside partners like the EFF and the University of Michigan. This project made HTTPS certificates free and automatically renewable, transforming HTTPS from a costly, complex setup into a default expectation across the web. As a result, the share of encrypted traffic skyrocketed from less than 40% in 2016 to around 80% by 2019.
  • Mozilla closely collaborated with Cloudflare to roll out Encrypted Client Hello (ECH) in Firefox in 2023, which encrypts the first “Hello” message of a user’s TLS connection so that even the website name is hidden from network observers.
  • Mozilla has most recently set a new standard for certificate revocation on the web, advancing encryption and security. In April 2025, Firefox became the first (and is still the only) browser that has deployed CRLite, the technology invented by a group of researchers that ensures revoked HTTPS certificates are identified quickly and privately without leaking unencrypted browsing activity to third parties.
  • In 2024, Firefox became the first browser to support DTLS 1.3, providing the most robust end-to-end encryption of real-time audio and video data, including all your web conferencing.

It’s easy to say we care about encryption, but it only works if the commitment is shared by the policymakers writing our laws and the engineers designing our systems.

As Neha Kochar, Director of Firefox Security and Privacy puts it: “Whether you’re visiting your bank’s website or sending a family photo, Firefox works behind the scenes to keep your browsing secure. With no shareholders to answer to, we serve only you — open-source and transparent by design, with verifiable guarantees that not even Mozilla knows which websites you visit or what you do online.”

That is why Global Encryption Day is such an important moment. If a system is weakened or broken, it opens vulnerabilities that anyone with the right tools can exploit. By standing up for encryption and the policies that protect it, we help ensure the internet remains safe, open, and fair for everyone.

To dig deeper on encryption, check out these partner resources: Global Encryption Coalition, Internet Society and Global Partners Digital.

This blog is part of a larger series. Be sure to follow Jenn Taylor Hodges and Sema Karaman on LinkedIn for further insights into Mozilla’s policy priorities.

The post Behind the Manifesto: Standing up for encryption to keep the internet safe appeared first on Open Policy & Advocacy.

The Servo BlogServo 0.0.1 Release

Today, the Servo team has released new versions of the servoshell binaries for all our supported platforms, tagged v0.0.1. These binaries are essentially the same nightly builds that were already available from the download page, with additional manual testing, now tagged explicitly as releases for future reference.

We plan to publish such a tagged release every month. For now, we are adopting a simple release process where we will use a recent nightly build and perform additional manual testing to identify issues and regressions before tagging and publishing the binaries.

There are currently no plans to publish these releases on crates.io or platform-specific app stores. The goal is just to publish tagged releases on GitHub.

Mozilla ThunderbirdThunderbird Monthly Development Digest: September 2025

Hello again from the Thunderbird development team! This month’s sprints have been about focus and follow-through, as we’ve tightened up our new Account Hub experience and continued the deep work on Exchange Web Services (EWS) support. While those two areas have taken centre stage, we’ve also been busy adapting to a wave of upstream platform changes that demanded careful attention to keep everything stable and our continuous integration systems happy. Alongside this, our developers have been lending extra support to the Services team to ensure a smooth path for upcoming releases. It’s been a month of steady, detail-oriented progress – the kind that doesn’t always make headlines, but lays the groundwork for the next leaps forward.

Exchange Web Services support announcement for 145

While support for Microsoft Exchange via EWS landed in Thunderbird 144, the new “Account Hub” setup interface had a few outstanding priorities which required our attention. Considering that the announcement of EWS support will likely generate a large spike in secondary account additions, we felt it important enough to delay the announcement in order to polish the setup interface and make the experience better for the users taking advantage of the new features. The team working on the “back end” took the opportunity to deliver more features that had been in our backlog and address some bugs that were reported by users who are already using EWS on Beta and Daily:

  • Offline message policy
  • Soft delete / copy to Trash
  • Empty Trash
  • Notifications with message preview
  • Reply-to multiple recipients bug
  • Mark Folder as read
  • Experimental tenant-specific configuration options (behind a preference) now being tested with early adopters

Looking ahead, the team is already focused on our work week, where we’ll have a chance to put plans in place to tackle some architectural refactoring and the next major milestones in our EWS implementation for Calendar and Address Book.

We were also delighted to work with a community contributor who has been hard at work on adding support for the FindItem operation. We know some of our workflows are tricky so we very much appreciate the support and patience required!

Keep track of feature delivery here. 

Account Hub

We’ve now added the ability to manually edit any configuration from the first screen. This effectively bypasses the simpler detection methods which don’t work for every configuration. Upon detection failure, a user is now able to switch between protocols and choose EWS configuration.

Other notable items being rolled into 145 are:

  • Redirect warning and handling to prevent a hang for platforms using autodiscover on a 3rd party server
  • Authentication step added for Exchange discovery requiring credentials
  • Ability to cancel the account configuration detection process
  • Improvements to the experience for users with screen reading technology

The creation of address books through the Account Hub is now the default experience in 145, which is coming to Beta release users this week and monthly Release users before I write next.

Follow progress in the Meta Bug

Calendar UI Rebuild

With the front end team mainly focused on Account Hub, the Calendar UI project has moved slowly this past month. We’ve concentrated the continued work in the following areas:

  • Acceptance widget
  • Title and close button
  • Dialog repositioning on resize
  • Migrating calendar strings from legacy .dtd files into modern .ftl files and preserving translations to avoid repeat work for our translation community.

Maintenance, Upstream adaptations, Recent Features and Fixes

With our focused maintenance sprint over, the team kept the Fluent Migration and moz-src migration projects moving in the background. They also handled another surge of upstream changes requiring triage. In addition to these items, the development community has helped us deliver a variety of improvements over the past month:

If you would like to see new features as they land, and help us find some early bugs, you can try running Daily and check the pushlog to see what has recently landed. This assistance is immensely helpful for catching problems early.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: September 2025 appeared first on The Thunderbird Blog.

Firefox Add-on ReviewsReddit revolutionized — use a browser extension to enhance your favorite forum

Reddit is awash with great conversation (well, not all the time). There’s a Reddit forum for just about everybody — sports fans, gamers, poets inspired by food, people who like arms on birds — you get the idea. 

If you spend time on Reddit, there are ways to augment your experience with a browser extension… 

Reddit Enhancement Suite

Used by millions of Redditors across various browsers, Reddit Enhancement Suite is optimized to work with the beloved “old Reddit”. 

Key features: 

  • Subreddit manager. Customize the top nav bar with your own subreddit shortcuts. 
  • Account switcher. Easily manage multiple Reddit accounts with a couple quick clicks. 
  • Show “parent” comment on hover. When you mouse over a comment, its “parent” comment displays. 
  • Dashboard. Fully customizable dashboard showcases content from subreddits, your message inbox & more. 
  • Tag specific users and subreddits so their activity appears more prominently
  • Custom filters. Select words, subreddits, or even certain users you want filtered out of your scrolling experience. 
  • New comment count. See the number of new comments on a thread since your last visit. 
  • Neverending Reddit. Just keep scrolling. Never stop!

Old Reddit Redirect

Speaking of the former design, Old Reddit Redirect provides a straightforward function. It simply ensures that every Reddit page you visit will redirect to the old.reddit.com domain. 

Sure, if you have a Reddit account the site gives you the option of using the old design, but with the browser extension you’ll get the old site regardless of being logged in or not. It’s also great for when you click Reddit links shared from the new domain. 

Sink It for Reddit

Designed to “make Reddit’s web version actually usable,” Sink It for Reddit is built for people craving a minimalist discussion platform.

Color-coded comments are much simpler to navigate, especially with Sink It’s brilliant Adaptive Dark Mode feature. Give this privacy-respecting extension a try if you desire a laser-focused Reddit experience.

Reddit Comment Collapser

No more getting lost in confusing comment threads for users of old.reddit.com. Reddit Comment Collapser cleans up your commentary view with a simple mouse click.

Compatible with Reddit Enhancement Suite and Old Reddit Redirect, this single-use extension is beloved by many seeking a minimalist view of the classic Reddit.

Reddit on YouTube

Bring Reddit with you to YouTube. Whenever you’re on a YouTube page, Reddit on YouTube searches for Reddit posts that link to the video and embeds those comments into the YouTube comment area. 

You can easily toggle between Reddit and YouTube comments and select either one to be your default preference. 

<figcaption class="wp-element-caption">If there are multiple Reddit threads about the video you’re watching, the extension will display them in tab form in the YouTube comment section. </figcaption>

Reddit Ad Remover

Sick of seeing so many “Promoted” posts and paid advertisements in the feed and sidebar? Reddit Ad Remover silences the noise. 

The extension even blocks auto-play video ads, which is great for people who don’t appreciate sudden bursts of commercial sound. Hey, somebody should create a subreddit about this.

Happy redditing, folks. Feel free to explore more news and media extensions on addons.mozilla.org.

Firefox Add-on ReviewsBoost your writing skills with a browser extension

Whatever kind of writing you do — technical documentation, corporate communications, Harry Potter-vampire crossover fan fiction — it probably happens online. Here are some fabulous browser extensions that will benefit anyone who writes on the web. Get grammar help, productivity tools, and other strong writing aids… 

LanguageTool

It’s like having your own copy editor with you wherever you write on the web. Language Tool – Grammar and Spell Checker will make you a better writer in 25+ languages. 

More than just a spell checker, LanguageTool also…

  • Recognizes common misuses of similar sounding words (e.g. there/their or your/you’re)
  • Works on social media sites and email
  • Offers alternate phrasing and style suggestions for brevity and clarity

Dictionary Anywhere

Need a quick word definition? With Dictionary Anywhere just double-click any word you find on the web and get an instant pop-up definition. 

You can even save and download words and their definitions for later offline reference. 

<figcaption class="wp-element-caption">Dictionary Anywhere — no more navigating away from a page just to get a word check.</figcaption>

Yomitan

Think of Yomitan as a dictionary extension that doubles as a language learning tool. Decipher and define text in 20+ languages.

As you navigate foreign language websites, Yomitan is right there with you to not only help define unfamiliar words and phrases, but also provide audio pronunciation guidance, flashcard creation for future study, offline support and more — all within a privacy protective framework.

Power Thesaurus

Every writer occasionally struggles to find the perfect word. Bring Power Thesaurus with you wherever you write on the web to gain instant access to alternative phrasing.

Simply highlight any word and pop up a handy thesaurus (also includes word definitions and antonyms). Power Thesaurus is a priceless tool for writers who labor over every word.

Dark Background and Light Text

Give your eyes a break. Dark Background and Light Text makes staring at blinking words all day a whole lot easier on your lookers. 

Really simple to use out of the box. Once installed, the extension’s default settings automatically flip the colors of every web page you visit. But if you’d like more granular control of color settings, just click the extension’s toolbar button to access a pop-up menu that lets you customize color schemes, set exceptions for sites where you don’t want colors inverted, and more. 

<figcaption class="wp-element-caption">Dark Background and Light Text goes easy on the eyes.</figcaption>

Clippings

If your online writing requires the repeated use of certain phrases (for example, work email templates or customer support responses), Clippings can be a huge time saver. 

Key features…

  • Create a practically limitless library of saved phrases
  • Paste your clippings anywhere via context menu
  • Organize batches of clippings with folders and color coded labels
  • Shortcut keys for power users
  • Extension supported in English, Dutch, French, German, and Portuguese (Brazil)
<figcaption class="wp-element-caption">Clippings handles bulk cutting/pasting. </figcaption>

We hope these extensions take your prose to the next level. Some writers may also be interested in this collection of great productivity extensions to help organize your writing projects. Feel free to explore thousands of other useful extensions on addons.mozilla.org

Firefox Add-on ReviewsExtension starter pack

You’ve probably heard about “ad blockers,” “tab managers,” “anti-trackers” or any number of browser customization tools commonly known as extensions. And maybe you’re intrigued to try one, but you’ve never installed an extension before and the whole notion just seems a bit vague. 

Let’s demystify extensions. 

An extension is simply an app that runs on a browser like Firefox. From serious productivity and privacy enhancing features to fun stuff like changing the way the web looks and feels, extensions give you the power to completely personalize your browsing experience. 

Addons.mozilla.org (AMO) is a discovery site that hosts thousands of independently developed Firefox extensions. It’s a vast and eclectic ecosystem of features, so we’ve hand-picked a small collection of great extensions to get you started…

I’ve always wanted an ad blocker!

uBlock Origin

Works beautifully “right out of the box.” Just add it to Firefox and uBlock Origin will automatically start blocking all types of advertising — display ads, banners, video pre-rolls, pop-ups — you name it. 

Of course, if you prefer deeper content blocking customization, uBlock Origin allows for fine control as well, like the ability to import your own custom block filters or access a data display that shows how much of a web page was blocked by the extension. More than just an ad blocker, uBlock Origin also effectively thwarts some websites that may be infected with malware. 

For more insights about this excellent ad blocker, please see uBlock Origin — everything you need to know about the ad blocker, or to explore even more ad blocker options, check out What’s the best ad blocker for you?

I’m concerned about my privacy and tracking around the web

Privacy Badger

The flagship anti-tracking extension from privacy proponents at the Electronic Frontier Foundation, Privacy Badger is programmed to look for tracking heuristics (i.e. specific actions that indicate someone is trying to track you).

Zero set up required. Just install Privacy Badger and it will automatically search for third-party cookies, HTML5 local storage “supercookies,” canvas fingerprinting, and other sneaky tracking methods.

Consent-O-Matic

Not only will Consent-O-Matic automatically handle pop-up data consent forms (per GDPR regulations), but it’s brilliantly designed to interpret the often intentionally confusing language of consent pop-ups trying to trick you into agreeing to invasive tracking.

Developed by internet privacy researchers at Aarhus University in Denmark who grew sick of seeing so many deceptive consent pop-ups, Consent-O-Matic’s decision-making logic is built upon studying hundreds of pop-ups and identifying duplicitous patterns. So using this extension not only gives you a great ally in the fight against intrusive tracking, but you’re spared the annoyance of constantly clicking consent forms all over the internet.

I need an easier way to translate languages

Simple Translate

Do you do a lot of language translations on the web? If so, it’s a hassle always copying text and navigating away from the page you’re on just to translate a word or phrase. Simple Translate solves this problem by giving you the power to perform translations right there on the page. 

Just highlight the text you want translated and right-click to get instant translations in a handy pop-up display, so you never have to leave the page again. 

My grammar in speling is bad!

LanguageTool

Anywhere you write on the web, LanguageTool will be there to lend a guiding editorial hand. It helps fix typos, grammar problems, and even recognizes common word mix-ups like there/their/they’re. 

Available in 25 languages, LanguageTool automatically works on any web-based publishing platform like Gmail, web docs, social media sites, etc. The clever extension will even spot words you’re possibly overusing and suggest alternatives to spruce up your prose. 

YouTube your way

Improve YouTube!

Boasting 175+ customization features, Improve YouTube! is simple to grasp while providing a huge variety of ways to radically alter YouTube functionality. 

Key features include… 

  • Customize YouTube’s layout with different color schemes
  • Create shortcuts for common actions like skipping to next video, scrolling back/forward 10 seconds, volume control & more
  • Filter out unwanted elements like Related Videos, Shorts, Comments, etc.
  • Ad blocking (with ability to allow ads from channels you choose to support)
  • Simple screenshot and save features
  • Playlist shuffle
  • Frame by frame scrolling
  • High-def default video quality

I’m drowning in browser tabs! Send help! 

OneTab

You’ve got an overwhelming number of open tabs. You can’t close them. You need them. But you can’t organize them all right now either. You’re too busy. What to do?! 

If you have OneTab on Firefox you just click the toolbar button and suddenly all those open tabs become a clean list of text links listed on a single page. Ahhh serenity.

Not only will you create browser breathing room for yourself, but with all those previously open tabs now closed and converted to text links, you’ve also freed up a bunch of CPU and memory, which should improve browser speed and performance. 

If you’ve never installed a browser extension before, we hope you found something here that piques your interest to try. To continue exploring ways to personalize Firefox through the power of extensions, please see our collection of 100+ Recommended Extensions

The Mozilla BlogWindows 10 updates are ending. Here’s what it means for Firefox users.

Firefox logo with orange fox wrapped around purple globe.

This week Microsoft released the final free monthly update to Windows 10. While this marks the end of support from Microsoft, Firefox will continue to support Windows 10 for the foreseeable future.

If you remain on Windows 10, you will continue to get the same updates to Firefox you do today, with all of our latest feature improvements and bug fixes. This includes our commitment to resolve security vulnerabilities as rapidly as we can, sometimes in less than 24 hours, with special security updates. Windows 10 remains a primary platform for Firefox users. Unlike on older versions of Windows like Windows 7 and 8, where Mozilla only offers security updates to Firefox, users on Windows 10 will get the latest and greatest features and bug fixes just like users on Windows 11. 

Should you upgrade to Windows 11?

While Mozilla will continue to deliver the latest updates to Firefox on Windows 10, security online also requires continued updates from Microsoft to Windows 10 itself, and to the many other software and devices that you use on your Windows 10 computer. That’s why we recommend upgrading to Windows 11 if your computer supports it. You can find out if your PC can run Windows 11 and upgrade to it for free from your Windows update settings. With this option, when you start up Windows 11 for the first time you’ll find that Firefox is still installed, and all of your data and settings are just like you left them. 

If your computer cannot run Windows 11, or you wish to remain on Windows 10 for other reasons, your next best option is to make sure you’re getting “extended security updates” from Microsoft. While these updates won’t deliver new Windows features or non-security bug fixes, they will fix security vulnerabilities that are found in Windows 10 in the future. You should see an option to “enroll” in these updates in your Windows update settings, and if you choose the “Windows Backup” option you’ll get the updates for free. Microsoft has more information on Windows 10 extended security updates if you have other questions. 

Preparing for a device upgrade or new PC

If you get a new Windows 11 PC you might be surprised to see that even if you used Windows Backup, non-Microsoft apps like Firefox haven’t migrated with you. You will typically get a link in your start menu or on your desktop to re-install Firefox, and after it’s installed you’ll find that everything is “fresh” — without your bookmarks, saved passwords, browsing history, or any of your other data and settings. 

This can be frustrating, but we do have a solution for you if you prepare in advance and back up your data using Firefox sync through a Mozilla account. To get started with sync, just choose “sign in” from the Firefox toolbar or menu, and we’ll walk you through the quick process of creating a Mozilla account and enabling sync. 

Firefox sync helps transfer your data securely

Sync isn’t just for people who have Firefox running on more than one computer. It’s also a safe way to back up your data and protect yourself against a lost laptop, a computer that breaks down or is damaged, or your own excited forgetfulness if you get rid of your old PC the moment you get a new one. And what many Firefox users may not realize is that Firefox sync is “end-to-end encrypted,” which is a fancy way of saying that not even Mozilla can read your data. Without your password, which we don’t know, your data is an indecipherable scramble even to us. But it’s safe on our servers nonetheless, which means that if you find yourself with a new PC and a “fresh” Firefox, all you need to do is log in and all your bookmarks, passwords, history and more will quickly load in. 

Meanwhile, you can also rest assured that if you continue to use Firefox on Windows 10 over the next few years, we’ll let you know through messages in Firefox if there is new information about staying secure and whether our stance regarding our support for Windows 10 needs to change. 

Thanks for using Firefox, and know that you can always reach us at Mozilla Connect. We’re eager for your feedback and questions.

Take control of your internet

Download Firefox

The post Windows 10 updates are ending. Here’s what it means for Firefox users. appeared first on The Mozilla Blog.

The Rust Programming Language Blogdocs.rs: changed default targets

Changes to default build targets on docs.rs

This post announces two changes to the list of default targets used to build documentation on docs.rs.

Crate authors can specify a custom list of targets using docs.rs metadata in Cargo.toml. If this metadata is not provided, docs.rs falls back to a default list. We are updating this list to better reflect the current state of the Rust ecosystem.

Apple silicon (ARM64) replaces x86_64

Reflecting Apple's transition from x86_64 to its own ARM64 silicon, the Rust project has updated its platform support tiers. The aarch64-apple-darwin target is now Tier 1, while x86_64-apple-darwin has moved to Tier 2. You can read more about this in RFC 3671 and RFC 3841.

To align with this, docs.rs will now use aarch64-apple-darwin as the default target for Apple platforms instead of x86_64-apple-darwin.

Linux ARM64 replaces 32-bit x86

Support for 32-bit i686 architectures is declining, and major Linux distributions have begun to phase it out.

Consequently, we are replacing the i686-unknown-linux-gnu target with aarch64-unknown-linux-gnu in our default set.

New default target list

The updated list of default targets is:

  • x86_64-unknown-linux-gnu
  • aarch64-apple-darwin (replaces x86_64-apple-darwin)
  • x86_64-pc-windows-msvc
  • aarch64-unknown-linux-gnu (replaces i686-unknown-linux-gnu)
  • i686-pc-windows-msvc

Opting out

If your crate requires the previous default target list, you can explicitly define it in your Cargo.toml:

[package.metadata.docs.rs]
targets = [
    "x86_64-unknown-linux-gnu",
    "x86_64-apple-darwin",
    "x86_64-pc-windows-msvc",
    "i686-unknown-linux-gnu",
    "i686-pc-windows-msvc"
]

Note that docs.rs continues to support any target available in the Rust toolchain; only the default list has changed.

Firefox Add-on ReviewsTranslate the web easily with a browser extension

At Mozilla, of course we’re fans of Firefox’s built-in, privacy-focused translation feature, but the beauty of browser extensions is the vast array of niche tools and customization features they can provide. Sometimes finding the right extension for your personal needs can profoundly change the way you interact with the web. So if you do a lot of translating on the internet, you might consider using a specialized extension translator. Here are some great options…

I just want a simple, efficient way to translate. I don’t need fancy features.

Simple Translate

It doesn’t get much simpler than this. Highlight the text you want to translate and click the extension’s toolbar icon to activate a streamlined pop-up. Your highlighted text automatically appears in the pop-up’s translation field and a drop-down menu lets you easily select your target language. Simple Translate also features a handy one-click “Translate this page” button. 

Translate Web Pages

Maybe you just need to translate full web pages, like when reading news articles, how-to guides, or job related sites. If so, Translate Web Pages might be the ideal solution for you with its sharp focus on full-page utility. 

The extension includes a handy feature if you commonly translate a few languages — you can select up to three languages to easily access with a single-click popup menu. TWP also gives you the option to designate specific websites you always want translated without prompt.

S3.Translator

Supporting 100+ languages, S3.Translator serves up a full feature set of language tools, like the ability to translate full or select portions of a page, text-to-speech translation, YouTube subtitle translations, and more.

There’s even a nifty Learning Language mode, which allows you to turn any text into the language you’re studying. Toggle between languages so you can conveniently learn as you naturally browse the web.

To Google Translate

Very popular, very simple translation extension that exclusively uses Google’s translation services, including text-to-speech. 

Simply highlight any text on a web page and right-click to pull up a To Google Translate context menu that allows three actions: 1) translate the text into your preferred language; 2) listen to audio of the text; 3) translate the entire page.

<figcaption class="wp-element-caption">Right-click any highlighted text to activate To Google Translate.</figcaption>

I do a ton of translating. I need power features to save me time and trouble.

ImTranslator

Striking a balance between out-of-the-box ease and deep customization potential, ImTranslator leverages three top translation engines (Google, Bing, Translator) to cover 100+ languages.

Other strong features include text-to-speech, dictionary and spell check in eight languages, hotkey customization, and a huge assortment of ways to customize the look of ImTranslator’s interface — from light and dark themes to font size and more. 

Immersive Translate

One of the most feature-packed translation extensions you’ll find, Immersive Translate goes beyond the web to capably handle PDFs, eBooks and much more.

With more features than we have space to list, here are some of the most uniquely compelling capabilities of Immersive Translate.

  • Smartly identifies the main content portions of a web page to provide elegant side-by-side bilingual translations while avoiding page clutter
  • Mouse hover translations
  • Input translation box, so you can enter text to be translated (an ideal tool for real-time bilingual conversations)
  • Video subtitle translations
  • Strong desktop and mobile support

Mate Translate

A slick, intuitive extension that performs all basic translation functions very well, but it’s Mate Translate’s paid tier that unlocks some unique features, such as Sync (saved translations can appear across devices and browsers, including iPhones and Mac). 

There’s also a neat Phrasebook feature, which lets you build custom word and phrase lists so you can return to common translations you frequently need. It works offline, too, so it’s ideal for travellers who need quick reference to common foreign phrases. 

These are some of our favorites, but there are plenty more translation extensions to explore on addons.mozilla.org.

The Mozilla BlogFox Recap: A student-built tool that analyzes your browsing habits

What would your browser history say about you? Whether you were getting things done this week or just collecting tabs, a new Firefox extension helps you reflect on your digital habits. 

Designed as a personal productivity tool, Fox Recap is a capstone project from a group of college seniors at California State University, Monterey Bay. It categorizes your browsing history, shows how much time you’re spending on different sites, and turns that data into simple visual reports. Everything happens locally on your device, so your information stays private.

Related story: Developer Spotlight: Fox Recap

Fox Recap screens: a gradient intro card inviting a dive into today’s browser activity, and a stat card showing Technology as the most-clicked category with 37 visits.

How Fox Recap works

Once you download and open the extension on Firefox for desktop, click on settings and grant permission to run the ML engine. From there, you can choose to view your browsing history for today, this week or this month. 

Fox Recap then lays out your activity in simple charts and categories like technology, shopping, education and entertainment.

“It’s really a tool for you to know how you use your browser,” said one of the student developers, Taimur Hasan. “Maybe you want to lessen the amount of time you spend on entertainment, and see that you use more education sites.”

Kate Sawtell wanted to create a tool that helps people see how they spend their time on the internet. “As a busy mom with a bunch of side projects, I love how it shows where my time online actually goes,” Kate said. “Am I researching, streaming shows or slipping into online shopping holes? It’s not super serious or judgmental, just a quick snapshot of my habits. Sometimes it makes me feel productive, other times it’s like, wow okay maybe I should chill on the shopping tab.”

<figcaption class="wp-element-caption">Members of the Fox Recap team at California State University, Monterey Bay, presenting their capstone project. Pictured (left to right): Taimur Hasan, Mozilla community manager Matt Cool, Kate Sawtell, and Diego Valdez. Not pictured: Peter Mitchell.</figcaption>

‘Useful AI and strong privacy can coexist’

Firefox machine learning engineer Tarek Ziadé served as a mentor for the project. He was struck by how quickly Taimur, Kate, Diego and Peter internalized both the technical challenges of building AI features and their privacy implications. 

“I had assumed younger developers might treat privacy as an afterthought,” Tarek said. “I was wrong. They pushed for privacy by design from the start.”

Taimur, who trained the model himself rather than using an existing one, explained: “It’s not an off-the-shelf model that I pulled off the internet. I trained it myself using my gaming computer.”

Tarek believes that what the group built reflects the direction in which privacy-focused technology is headed.

“Intelligence should be local by default, data should be minimized, and anything that needs to leave the device should be explicit and consented,” Tarek said. “As AI capabilities become a commodity, the differentiator will be trust.”

That’s exactly where Mozilla should be leading, Tarek added: “making high-quality, on-device AI the default, and proving that useful AI and strong privacy can coexist.”

A glimpse of the next generation of web builders

For team member Diego Valdez, the project’s value is personal and practical: “I hope people who use Fox Recap can learn about their browsing activity in an engaging way, in hopes [of helping them] improve their productivity.”

Mozilla community manager Matt Cool sees it in a larger frame. “It’s a scary and exciting time to enter the tech industry,” Matt said. “The next generation of open web builders is already stepping up. Right here in Monterey, they’re building real-world projects, contributing to open-source, and tackling some of the toughest problems facing the future of the web.”

Fox Recap is one of several student projects showcased at this spring’s Capstone Festival by the School of Computing and Design at Cal State Monterey Bay. Professor Bude Su, who chairs the department, emphasized the value of mentorship as students prepare for what comes next.

“Mozilla’s involvement brought an added layer of motivation for our students,” Professor Su said. “The opportunity to work on a real-world project under industry mentorship has been invaluable for our students’ learning and professional growth.”

The collaboration shows what can happen when education, mentorship and Mozilla’s values of openness and trust come together. Fox Recap helps make sense of the tabs we collect, but it also points to something bigger: a new wave of developers building tools that respect the people who use them.

Take control of your internet

Download Firefox

The post Fox Recap: A student-built tool that analyzes your browsing habits  appeared first on The Mozilla Blog.

The Mozilla BlogThe social media director who helps make Merriam-Webster go viral

A bearded man in a denim shirt over a dark T-shirt, against a green background with a layered pixel effect.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.

We caught up with John Sabine, the social media director of Merriam-Webster and Encyclopedia Britannica. He talks about his favorite subreddit, silly deep dives and why his job makes him hopeful about the internet.

What is your favorite corner of the internet?

Honestly, it’s the “AskHistorians” subreddit. It’s one of my few internet habits that I have that has kept up. I can’t recommend it enough. I wish more things were curated with such level of scrutiny and scholarship. If people disagree, they disagree as Ph.D. people disagree. I don’t have a Ph.D., but I imagine it’s respectful. There’s profiles and avatars, but those feel very secondary to the content. You lead with the “what,” and then you can look up the “who” afterwards. I don’t post on Reddit at all; I’m a lurker in general on the internet. So I’m shocked by how many people weigh in on things.

What is an internet deep dive that you can’t wait to jump back into?

I have a bunch of articles that I have bookmarked… and my goal is to read one of the 400 articles I have saved. What I’m looking forward to specifically is just to read an article for joy, that’s not doomscrolling or part of my job. I do feel like when you have this job, you kind of get internet-ed out every day. And also: crosswords. I want to get better at crosswords, if that counts. We have one on merriam-webster.com, and I also do The New York Times, though I rarely finish it.

What’s the last great story that you read?

It was on ringer.com. A writer named Tyler Parker went through NBA names. He just ranked their names, had nothing to do with basketball. I started it before bed, and I was like, “Oh, I’ll skim.” I read every single word. He really thought about the names and how they make people feel. And it’s truly just how they sound like. That’s it. It was written beautifully. That’s a silly one, but I think silly deep dives are probably good for the soul right now.

What is the one tab you always regret closing?

Probably my calendar… And honestly, I always have Merriam-Webster and Britannica up. And I rarely do close them because I always need them for my work.

What can you not stop talking about on the internet right now?

So Merriam-Webster is releasing its first print dictionary in over 20 years. And they made it really pretty, and it feels like a really cool book that you would display. I’m very excited because I’m doing deep dives of old ads for an almost 200-year-old company. There’s a lot of stuff to go through. Some of it we have in the archives, some of it is just out there. So just going through the old print stuff, finding old paper dictionaries. So, like, selfishly, I’m excited for the new collegiate 12th edition.

What was the first online community you engaged with?

I’m a lurker, so engagement is a lot for me. The first time I probably posted was on a forum when I moved to Chicago to do improv comedy. There’s a Chicago improv forum and I think I was like, “What show should I see?”

What articles and/or videos are you waiting to read/watch right now?

I’m waiting for the next [recommendation] from my group chats. There are some people that will just send you anything, and you’re like, “OK, thank you for sending me this. I’ll watch 30% of the things you sent.” But there’s the ones that you’re like, “Oh, yeah, gotta watch that.” So I’ve got a couple friends like that, so I hope they send me stuff because Lord knows, the internet’s huge.

Is there anything about the way people engage with Merriam-Webster online that makes you feel hopeful about the internet?

Oh, 4,000%. Yes, doomscrolling is a reality of being online now. I know a lot of people who just step away and go outside and touch grass.

But there’s still good stuff happening. The comment sections on our Instagram and TikTok can actually be really fun. People have genuine, kind, often funny conversations. It’s rarely mean. Seeing that makes me hopeful, because people clearly want wholesome, thoughtful interactions.

People have a personal connection to language. Over time, I’ve seen our audience expand to include all kinds of people who care deeply about words, even if they wouldn’t call themselves “word nerds.” Language is personal, and I think our work celebrates that.

And honestly, I feel more hopeful doing this job on the internet than I think I would if I weren’t doing this work and was just online as a regular user.


John Sabine is the social media director for Merriam-Webster and Encyclopedia Britannica. He is originally from Dallas, Texas, and he’s never once spelled “definitely” correctly on the first try.

The post The social media director who helps make Merriam-Webster go viral appeared first on The Mozilla Blog.

Mozilla Performance BlogFirefox 144 ships interactionId for INP


TL;DR

Firefox 144 ships PerformanceEventTiming.interactionId, which lets browsers and tools group events that belong to the same user interaction. This property is used to calculate Interaction to Next Paint (INP), one of the Core Web Vitals.


Firefox 144 ships support for the PerformanceEventTiming.interactionId property. It helps browsers and tools identify which input events belong to a single user interaction, such as a pointerdown, pointerup, and click triggered by the same tap.

The Interaction to Next Paint (INP) metric, part of the Core Web Vitals, relies on this grouping to measure how responsive a page feels during real user interactions. INP represents how long it takes for the next frame to paint after a user input. Instead of looking at a single event, it captures the worst interaction latency during the page’s lifetime, giving a more complete view of responsiveness.

Why this matters

Before interactionId, each event had to be measured separately, which made it hard to connect related events as part of the same interaction.
With this property, performance tools and developers can now:

  • Group related input events into a single interaction
  • Measure interaction latency more accurately
  • Identify and debug slow interactions more easily

Using interactionId

If you use the PerformanceObserver API to collect PerformanceEventTiming entries, you’ll start seeing an interactionId field in Firefox 144. Events that share a non-zero interactionId belong to the same interaction group, which can be used to calculate latency or understand where delays occur.

// The key is the interaction ID.
let eventLatencies = {};

const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    if (entry.interactionId > 0) {
      const interactionId = entry.interactionId;
      if (!eventLatencies[interactionId]) {
        eventLatencies[interactionId] = [];
      }
      eventLatencies[interactionId].push(entry.duration);
    }
  });
});

observer.observe({ type: "event", buffered: true });

// Later (e.g., on pagehide), log the longest event duration recorded
// for each interaction. The observer callback fires asynchronously, so
// run this only after interactions have actually occurred.
Object.entries(eventLatencies).forEach(([id, durations]) => {
  console.log(id, Math.max(...durations));
});

If you use external tools and libraries like web-vitals, they should already collect the INP value for you.

Firefox Developer ExperienceFirefox WebDriver Newsletter 144

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 144 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

WebDriver BiDi

Marionette

The Mozilla BlogChoose how you search and stay organized with Firefox

Illustration showing Firefox’s browser interface with a focus on search options. A bar labeled “Use visual search” overlaps an image of a floral painting, while another option labeled “with the Perplexity icon” appears below it. A small browser window shows a cropped view of the same artwork. The Firefox toolbar is visible at the bottom with a highlighted smiley face icon. The background is a gradient of purple and blue with grid lines and sparkles, conveying a playful, tech-inspired design.

At Mozilla, we build Firefox around one principle: putting you in control. With today’s release, we’re introducing new features that make browsing smarter and more personal while staying true to the values you care about most: privacy and choice.

A new option for search, still on your terms.

Earlier this year, we gave you more choice in how you search by testing Perplexity, an AI-powered answer engine, as a search option on Firefox. Now, after positive feedback, we’re making it a fixture, rolling it out to more users for desktop. Perplexity provides conversational answers with citations, so you can validate information without digging through pages of results.

This addition reflects our shared commitment to choice: You decide when to use an AI answer engine, or if you want to use it at all. Available globally, Perplexity can be found in the unified search button in the address bar. We’ll be bringing Perplexity to mobile in the coming months. And as always, privacy matters – Perplexity maintains strict prohibitions against selling or sharing personal data.

Organize your life with profiles

At the beginning of the year, we started testing profiles — a way to create and switch between different browsing setups. After months of gradual rollout and community feedback, profiles are now available to everyone.

Firefox Profiles feature shown with an illustration of three foxes and a setup screen for creating and customizing browser profiles.<figcaption class="wp-element-caption">Create and switch between different browsing setups</figcaption>

Profiles let you keep work tabs distinct from personal browsing, or dedicate a setup to testing extensions or managing a specific project. Each profile runs independently, giving you flexibility and focus. Feedback from students, professionals and contributors helped us refine this feature into the version you see today.

Discover more with visual search

In September, we announced visual search on Mozilla Connect and began rolling it out for testing. Powered by Google Lens, it lets you search what you see with a simple right-click on any image.

<figcaption class="wp-element-caption">Search what you see with a simple right-click on an image</figcaption>

You can:

  • Find similar products, places or objects 
  • Copy, translate or search text from images
  • Get inspiration for learning, travel or research

This desktop-only feature makes searching more intuitive and curiosity-driven. For now, it requires Google as your default search engine. Tell us what you think. Your feedback will guide where visual search appears next, from the address bar to mobile.

Evolving to meet your needs

Today’s release brings more ways to browse on your terms — from smarter search with Perplexity, to profiles that let you separate work from play, to visual search.

Each of these features reflects what matters most to us: putting you in control of your online experience and building alongside the community that inspires Firefox. With your feedback, we’ll keep shaping a browser that not only keeps pace with the future of the web but also stays true to the open values you trust.

We’re excited to see how you use what’s new, and can’t wait to share what’s next.


The post Choose how you search and stay organized with Firefox appeared first on The Mozilla Blog.

Niko MatsakisWe need (at least) ergonomic, explicit handles

Continuing my discussion on Ergonomic RC, I want to focus on the core question: should users have to explicitly invoke handle/clone, or not? This whole “Ergonomic RC” work was originally proposed by Dioxus and their answer is simple: definitely not. For the kind of high-level GUI applications they are building, having to call cx.handle() to clone a ref-counted value is pure noise. For that matter, for a lot of Rust apps, even cloning a string or a vector is no big deal. On the other hand, for a lot of applications, the answer is definitely yes – knowing where handles are created can impact performance, memory usage, and even correctness (don’t worry, I’ll give examples later in the post). So how do we reconcile this?
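
To make the noise concrete, here’s a minimal sketch of the clone-before-move dance that every callback forces today (the state value and callback are hypothetical stand-ins, not Dioxus’s actual API):

```rust
use std::rc::Rc;

fn main() {
    let state = Rc::new(String::from("app state"));

    // Every callback that needs its own handle repeats this dance;
    // in GUI code with dozens of callbacks it reads as pure noise.
    let on_click = {
        let state = Rc::clone(&state); // explicit new handle
        move || println!("clicked: {state}")
    };

    on_click();
    println!("original still usable: {state}");
}
```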

This blog argues that we should make it ergonomic to be explicit. This wasn’t always my position, but after an impactful conversation with Josh Triplett, I’ve come around. I think it aligns with what I once called the soul of Rust: we want to be ergonomic, yes, but we want to be ergonomic while giving control1.

I like Tyler Mandry’s Clarity of purpose construction, “Great code brings only the important characteristics of your application to your attention”. The key point is that there is great code in which cloning and handles are important characteristics, so we need to make that code possible to express nicely. This is particularly true since Rust is one of the very few languages that really targets that kind of low-level, foundational code.

This does not mean we cannot (later) support automatic clones and handles. It’s inarguable that this would benefit clarity of purpose for a lot of Rust code. But I think we should focus first on the harder case, the case where explicitness is needed, and get that as nice as we can; then we can circle back and decide whether to also support something automatic. One of the questions for me, in fact, is whether we can get “fully explicit” to be nice enough that we don’t really need the automatic version. There are benefits from having “one Rust”, where all code follows roughly the same patterns, where those patterns are perfect some of the time, and don’t suck too bad2 when they’re overkill.

“Rust should not surprise you.” (hat tip: Josh Triplett)

I mentioned this blog post resulted from a long conversation with Josh Triplett3. The key phrase that stuck with me from that conversation was: Rust should not surprise you. The way I think of it is like this. Every programmer knows what it’s like to have a marathon debugging session – to sit and stare at code for days and think, but… how is this even POSSIBLE? Those kinds of bug hunts can end in a few different ways. Occasionally you uncover a deeply satisfying, subtle bug in your logic. More often, you find that you wrote if foo and not if !foo. And occasionally you find out that your language was doing something that you didn’t expect. That some simple-looking code concealed a subtle, complex interaction. People often call this kind of thing a footgun.

Overall, Rust is remarkably good at avoiding footguns4. And part of how we’ve achieved that is by making sure that things you might need to know are visible – like, explicit in the source. Every time you see a Rust match, you don’t have to ask yourself “what cases might be missing here” – the compiler guarantees you they are all there. And when you see a call to a Rust function, you don’t have to ask yourself if it is fallible – you’ll see a ? if it is.5
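
As a quick refresher, here is a minimal sketch (nothing specific to this proposal) of how both of those properties show up right in the source:

```rust
use std::num::ParseIntError;

enum Mode {
    Read,
    Write,
}

fn describe(mode: Mode) -> &'static str {
    // The compiler rejects this match if `Mode` ever grows a variant
    // that is not handled here: no case can go missing silently.
    match mode {
        Mode::Read => "read",
        Mode::Write => "write",
    }
}

// Fallibility is visible at the call site: the `?` marks exactly
// where this function can return early with an error.
fn parse_port(input: &str) -> Result<u16, ParseIntError> {
    let port: u16 = input.parse()?;
    Ok(port)
}

fn main() {
    println!("{} / {}", describe(Mode::Read), describe(Mode::Write));
    println!("{:?}", parse_port("8080"));
    println!("{:?}", parse_port("not a port"));
}
```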

Creating a handle can definitely “surprise” you

So I guess the question is: would you ever have to know about a ref-count increment? The tricky part is that the answer here is application dependent. For some low-level applications, definitely yes: an atomic reference count is a measurable cost. To be honest, I would wager that the set of applications where this is true is vanishingly small. And even in those applications, Rust already improves on the state of the art by giving you the ability to choose between Rc and Arc and then proving that you don’t mess it up.
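
That last point is worth spelling out: Rc uses plain counters and is not Send, and the compiler enforces the distinction. A minimal sketch:

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc: atomic (slightly costlier) counts, so handles may cross threads.
    let shared = Arc::new(vec![1, 2, 3]);
    let handle = Arc::clone(&shared);
    thread::spawn(move || println!("{handle:?}")).join().unwrap();

    // Rc: plain counts, and `Rc<T>` is not `Send`, so the compiler
    // rejects any attempt to smuggle a handle across threads.
    let local = Rc::new(vec![1, 2, 3]);
    // thread::spawn(move || println!("{local:?}")); // ERROR: `Rc` cannot be sent
    println!("{local:?}");
}
```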

But there are other reasons you might want to track reference counts, and those are less easy to dismiss. One of them is memory leaks. Rust, unlike GC’d languages, has deterministic destruction. This is cool, because it means that you can leverage destructors to manage all kinds of resources, as Yehuda wrote about long ago in his classic ode-to-RAII entitled “Rust means never having to close a socket”. But although the points where handles are created and destroyed are deterministic, the nature of reference-counting can make it much harder to predict when the underlying resource will actually get freed. And if those increments are not visible in your code, it is that much harder to track them down.
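
A minimal sketch of that gap between “handles drop deterministically” and “the resource frees predictably” (the Resource type is a hypothetical stand-in for a socket, buffer, etc.):

```rust
use std::rc::Rc;

struct Resource; // hypothetical stand-in for a socket, buffer, etc.

impl Drop for Resource {
    fn drop(&mut self) {
        println!("resource actually freed here");
    }
}

fn main() {
    let a = Rc::new(Resource);
    let b = Rc::clone(&a); // if this increment is invisible, so is the leak's cause

    drop(a); // deterministic, but frees nothing: the count is still 1
    println!("after dropping `a`: resource still alive");
    drop(b); // only the *last* handle runs the destructor
}
```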

Just recently, I was debugging Symposium, which is written in Swift. Somehow I had two IPCManager instances when I only expected one, and each of them was responding to every IPC message, wreaking havoc. Poking around I found stray references floating around in some surprising places, which was causing the problem. Would this bug have still occurred if I had to write .handle() explicitly to increment the ref count? Definitely, yes. Would it have been easier to find after the fact? Also yes.6

Josh gave me a similar example from the “bytes” crate. A Bytes type is a handle to a slice of some underlying memory buffer. When you clone that handle, it will keep the entire backing buffer around. Sometimes you might prefer to copy your slice out into a separate buffer so that the underlying buffer can be freed. It’s not that hard for me to imagine trying to hunt down an errant handle that is keeping some large buffer alive and being very frustrated that I can’t see explicitly in the code where those handles are created.
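
A minimal sketch of that trade-off with the bytes crate (the buffer size is chosen arbitrarily for illustration):

```rust
use bytes::Bytes; // bytes = "1" in Cargo.toml

fn main() {
    let big = Bytes::from(vec![0u8; 1_000_000]);

    // `slice` is zero-copy: this 16-byte view is a new handle that
    // keeps the entire 1 MB backing buffer alive as long as it lives.
    let view = big.slice(0..16);

    // Copying the bytes out detaches them, so the backing buffer can
    // be freed once `big` and `view` (and any other handles) are gone.
    let detached = Bytes::copy_from_slice(&view);

    drop(big);
    drop(view); // now the 1 MB buffer can actually be freed
    println!("kept {} bytes", detached.len());
}
```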

A similar case occurs with APIs like Arc::get_mut7. get_mut takes an &mut Arc<T> and, if the ref-count is 1, returns an &mut T. This lets you take a shareable handle that you know is not actually being shared and recover uniqueness. This kind of API is not frequently used – but when you need it, it’s so nice that it’s there.
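
In code, a minimal sketch (make_mut is the sibling API from footnote 7):

```rust
use std::sync::Arc;

fn main() {
    let mut a = Arc::new(vec![1, 2, 3]);

    // Ref-count is 1: get_mut recovers unique `&mut` access.
    if let Some(v) = Arc::get_mut(&mut a) {
        v.push(4);
    }

    let b = Arc::clone(&a);
    // Shared now: get_mut refuses, because another handle exists.
    assert!(Arc::get_mut(&mut a).is_none());

    // make_mut never refuses: when shared, it clones the data first,
    // which is exactly the copy-on-write pattern from footnote 7.
    Arc::make_mut(&mut a).push(5);
    assert_eq!(*a, vec![1, 2, 3, 4, 5]);
    assert_eq!(*b, vec![1, 2, 3, 4]); // `b` still sees the old data
}
```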

“What I love about Rust is its versatility: low to high in one language” (hat tip: Alex Crichton)

Entering the conversation with Josh, I was leaning towards a design where you had some form of automated cloning of handles and an allow-by-default lint that would let crates which don’t want that turn it off. But Josh convinced me that there is a significant class of applications that want handle creation to be ergonomic AND visible (i.e., explicit in the source). Low-level network services and even things like Rust For Linux likely fit this description, but any Rust application that uses get_mut or make_mut might also.

And this reminded me of something Alex Crichton once said to me. Unlike the other quotes here, it wasn’t in the context of ergonomic ref-counting, but rather when I was working on my first attempt at the “Rustacean Principles”. Alex was saying that he loved how Rust was great for low-level code but also worked well for high-level stuff like CLI tools and simple scripts.

I feel like you can interpret Alex’s quote in two ways, depending on what you choose to emphasize. You could hear it as, “It’s important that Rust is good for high-level use cases”. That is true, and it is what leads us to ask whether we should even make handles visible at all.

But you can also read Alex’s quote as, “It’s important that there’s one language that works well enough for both” – and I think that’s true too. The “true Rust gestalt” is when we manage to simultaneously give you the low-level control that grungy code needs but wrapped in a high-level package. This is the promise of zero-cost abstractions, of course, and Rust (in its best moments) delivers.

The “soul of Rust”: low-level enough for a kernel, usable enough for a GUI

Let’s be honest. High-level GUI programming is not Rust’s bread-and-butter, and it never will be; users will never confuse Rust for TypeScript. But then, TypeScript will never be in the Linux kernel.

The goal of Rust is to be a single language that can, by and large, be “good enough” for both extremes. The goal is to make enough low-level details visible for kernel hackers but do so in a way that is usable enough for a GUI. It ain’t easy, but it’s the job.

This isn’t the first time that Josh has pulled me back to this realization. The last time was in the context of async fn in dyn traits, and it led to a blog post talking about the “soul of Rust” and a followup going into greater detail. I think the catchphrase “low-level enough for a Kernel, usable enough for a GUI” kind of captures it.

Conclusion: Explicit handles should be the first step, but it doesn’t have to be the final step

There is a slight caveat I want to add. I think another part of Rust’s soul is preferring nuance to artificial simplicity (“as simple as possible, but no simpler”, as they say). And I think the reality is that there’s a huge set of applications that make new handles left-and-right (particularly but not exclusively in async land8) and where explicitly creating new handles is noise, not signal. This is why e.g. Swift9 makes ref-count increments invisible – and they get a big lift out of that!10 I’d wager most Swift users don’t even realize that Swift is not garbage-collected11.

But the key thing here is that even if we do add some way to make handle creation automatic, we ALSO want a mode where it is explicit and visible. So we might as well do that one first.

OK, I think I’ve made this point 3 ways from Sunday now, so I’ll stop. The next few blog posts in the series will dive into (at least) two options for how we might make handle creation and closures more ergonomic while retaining explicitness.


  1. I see a potential candidate for a design axiom… rubs hands with an evil-sounding cackle and a look of glee ↩︎

  2. It’s an industry term↩︎

  3. Actually, by the standards of the conversations Josh and I often have, it wasn’t really all that long – an hour at most. ↩︎

  4. Well, at least sync Rust is. I think async Rust has more than its share, particularly around cancellation, but that’s a topic for another blog post. ↩︎

  5. Modulo panics, of course – and no surprise that accounting for panics is a major pain point for some Rust users. ↩︎

  6. In this particular case, it was fairly easy for me to find regardless, but this application is very simple. I can definitely imagine ripgrep’ing around a codebase to find all increments being useful, and that would be much harder to do without an explicit signal they are occurring. ↩︎

  7. Or Arc::make_mut, which is one of my favorite APIs. It takes an Arc<_> and gives you back mutable (i.e., unique) access to the internals, always! How is that possible, given that the ref count may not be 1? Answer: if the ref-count is not 1, then it clones it. This is perfect for copy-on-write-style code. So beautiful. 😍 ↩︎

  8. My experience is that, due to language limitations we really should fix, many async constructs force you into 'static bounds which in turn force you into Rc and Arc where you’d otherwise have been able to use &↩︎

  9. I’ve been writing more Swift and digging it. I have to say, I love how they are not afraid to “go big”. I admire the ambition I see in designs like SwiftUI and their approach to async. I don’t think they bat 100, but it’s cool they’re swinging for the stands. I want Rust to dare to ask for more↩︎

  10. Well, not only that. They also allow class fields to be assigned when aliased which, to avoid stale references and iterator invalidation, means you have to move everything into ref-counted boxes and adopt persistent collections, which in turn comes at a performance cost and makes Swift a harder sell for lower-level foundational systems (though by no means a non-starter, in my opinion). ↩︎

  11. Though I’d also wager that many eventually find themselves scratching their heads about a ref-count cycle. I’ve not dug into how Swift handles those, but I see references to “weak handles” flying around, so I assume they’ve not (yet?) adopted a cycle collector. To be clear, you can get a ref-count cycle in Rust too! It’s harder to do since we discourage interior mutability, but not that hard – see the sketch below. ↩︎
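
A minimal sketch of such a cycle, using a hypothetical Node type (Weak would be the usual fix for the back edge):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    // Interior mutability lets us close the loop after construction.
    next: RefCell<Option<Rc<Node>>>,
}

impl Drop for Node {
    fn drop(&mut self) {
        println!("node dropped");
    }
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b)); // a -> b -> a

    drop(a);
    drop(b);
    // Neither destructor runs: each node still holds a strong handle
    // to the other, so both counts sit at 1 forever. (Using
    // `Weak<Node>` for the back edge would break the cycle.)
    println!("end of main: both nodes leaked");
}
```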

Mozilla ThunderbirdState of the Thunder 13: How We Make Our Roadmap

Welcome back to our thirteenth episode of State of the Thunder! Nothing unlucky about this latest installment, as Managing Director Ryan Sipes walks us through how Thunderbird creates its roadmap. Unlike other companies where roadmaps are driven solely by business needs, Thunderbird is working with our community governance and feedback from the wider user community to keep us honest even as we move forward.

Want to find out how to join future State of the Thunders? Be sure to join our Thunderbird planning mailing list for all the details.

Open Source, Open Roadmaps

In other companies, product managers tend to draft roadmaps based on business needs. Publishing that roadmap might be an afterthought, or might not happen at all. Thunderbird, however, is open source, so that’s not our process.

A quick history lesson provides some needed context. Eight years ago, Thunderbird was solely a community project driven by a community council. We didn’t have a roadmap like we do today. With the earlier loss of funding and support, the project was in triage mode. Since then, thanks to a wonderful user community who has donated their skill, time, and money, we’ve changed our roadmap process.

The Supernova release (Thunderbird 115) was where we first really focused on making a roadmap with a coherent product vision: a modernized app in performance and appearance. We developed this roadmap with input from the community, even if there was pushback to a UI change.

The 2026 Roadmap Process

At this point, the project has bylaws for the roadmap process, which unites the Thunderbird Council, MZLA staff, and user feedback. Over the past year we’ve added two new roadmaps: one for the Android app and another for Thunderbird Pro. (Note, iOS doesn’t have a roadmap yet. Our current goal is: let’s be able to receive email!) But even with these changes and additions, the Mozilla Manifesto is still at the heart of everything we do. We firmly believe that making roadmaps with community governance and feedback from the larger community keeps us honest and helps us make products that genuinely improve people’s lives.

Want to see how our 2025-2026 Roadmaps are taking shape? Check out the Desktop Roadmap, as well as the mobile roadmaps for Android and iOS.

Questions

Integrating Community Contributions

In the past, community contributors have picked up “nice to have” issues and developed them alongside us, or pursued the problems and challenges that affect them the most. Sometimes, these efforts coincide with our roadmap, and we get features like the new drag and drop folders!

Needless to say, we love when the community helps us get the product where we hope it will go. Sometimes, we have to pause development because of shifted priorities, and we’re trying to get better at updating contributors when these shifts happen, in places like the tb-planning and mobile-planning mailing lists.

And these community contributions aren’t just code! Testing is a crucial way to help make Thunderbird shine on desktop and mobile. Community suggestions on Mozilla Connect help us dream big, as we discussed in the last two episodes. Reporting bugs, either on Bugzilla for the desktop app or GitHub for the Android app, helps us know when things aren’t working. We encourage our community to learn more about the Council, and don’t be afraid to get in touch with them at council@thunderbird.net.

Telemetry and the Roadmap

While we know there are passionate debates on telemetry in the open source community, we want to mention how respectful telemetry can make Thunderbird better. Our telemetry helps us see which features are important, and which ones just clutter up the UI. We don’t collect Personally Identifying Information (PII), and our code is open so you can check us on this. Unlike Outlook, which shares its data with 801 partners, we don’t share yours. You can read all about what we use and how we use it here.

So if you have telemetry turned off, please, we ask you to turn it on, and if it’s already on, to keep it on! Especially if you’re a Linux user, enabling telemetry helps us have a better gauge of our Linux user base and how to best support you.

Roadmap Categories and Organizing

Should we try to ‘bucket’ similar items on our roadmap and spread development evenly between them, or should we concentrate on the bucket that needs it most? The answer to this question depends on who you ask! Sometimes we’re focused on a particular area, like the UI work in Supernova and the current UX work in Calendar. Sometimes we’re working to pay down tech debt across our code. That effort in reducing tech debt can pave the way for future work, like the current efforts to modernize our database so we can have a true Conversation View and other features. Sometimes roadmaps reveal obstacles you have to overcome, and Ryan thinks we’re getting faster at this.

Where to see the roadmaps

The current desktop roadmap is here, while the current Android roadmap is on our GitHub repo. In the future, we’re hoping to update where these roadmaps live, how they look, and how you can interact with them. (Ryan is particularly partial to Obsidian’s roadmap.) We ultimately want our roadmaps to be storytelling devices, and to keep them up to date with any recent changes.

Current Calls for Involvement

Join us for the last few days of testing EWS mail support! Also, we had a fantastic time with the Ask a Fox replython, and would love if you helped us answer support questions on SUMO.

Watch the Video (also on PeerTube)

Listen to the Podcast

The post State of the Thunder 13: How We Make Our Roadmap appeared first on The Thunderbird Blog.

The Mozilla BlogShake to Summarize recognized with special mention in TIME’s Best Inventions of 2025

Cover credit: Photography by Spencer Lowell for TIME

Shake to Summarize has been recognized with a Special Mention in TIME’s Best Inventions of 2025.

Each year TIME spotlights a range of new industry-defining innovations across consumer electronics, health tech, apps and beyond. This year, Firefox’s Shake to Summarize feature made the list for bringing a smart solution to a modern user problem: information overload. 

With a single shake or tap, users on iOS devices can get to the heart of an article in seconds. The cool part? Summaries adapt to what you’re reading: recipes pull out the steps for cooking, sports focus on game scores and stats, and news highlights the key takeaways from a story.

“We’re thrilled to see Firefox earn a TIME Best Inventions 2025 Special Mention! Our work on Shake to Summarize reflects how Firefox is evolving,” said Anthony Enzor-DeMeo, general manager of Firefox. “We’re reimagining our browser to fit seamlessly into modern life, helping people browse with less clutter and more focus. The feature is also part of our efforts to give mobile users a cleaner UI and smarter tools that make browsing on the go fast, seamless, and even fun.”

Launched in September 2025 and currently available to English-language users in the U.S., Shake to Summarize generates summaries using Apple Intelligence on iPhone 15 Pro or later running iOS 26 or above, and Mozilla-hosted AI for other devices running iOS 16 or above.

“This recognition is a testament to the incredible work of our UX, design, product, and engineering teams who brought this innovation to life, showcasing that Firefox continues to lead with purpose, creativity, and a deep commitment to user-centric design. Big thank you!” added Enzor-DeMeo.

The Firefox team is working on making the feature available to more users, including those on Android. In the meantime, iOS users can already make the most of Shake to Summarize, available in the Apple App Store now.


The post Shake to Summarize recognized with special mention in TIME’s Best Inventions of 2025 appeared first on The Mozilla Blog.

Mozilla ThunderbirdState Of The Bird 2024/25

The past twelve months have been another remarkable chapter in Thunderbird’s journey. Together, we started expanding Thunderbird beyond its strong desktop roots, introducing it to smartphones and web browsers to make it more accessible to more people. Thunderbird for Android arrived in the fall and has been steadily improving thanks to our growing mobile team, as well as feedback and contributions from our growing global family. A few months later, in December 2024, we celebrated an extraordinary milestone: 20 years of Thunderbird! We also looked toward a sustainable future with the announcement of Thunderbird Pro, with one of its first services, Appointment, already finding an audience in closed beta. 

The past year also saw a shift in how Thunderbird evolves. Although we recently released our latest annual ESR update (codenamed Eclipse), the bigger news is that our team built the new Monthly Release channel, which is now the default for most of you. This change means you’ll see more frequent updates that make Thunderbird feel fresher, more responsive, and more in tune with your personalized needs.

Before diving into all the details, I want to pause and express our deepest gratitude to the incredible global community that makes all of this possible. To the hundreds of thousands of people who donated financially, the volunteers who contributed their time and expertise, and the beta testers who carefully helped us polish each update: thank you! Thunderbird thrives because of you. Every milestone we celebrate is a shared achievement, and a shining example of the power of community-driven, open source software development.

Team and Product Updates

Desktop and release updates

In December 2024, we celebrated Thunderbird’s 20th anniversary. Two decades of proving that email software can be both powerful and principled was not without its ups and downs, but that milestone reaffirmed something we hear so often from our community: Thunderbird continues to matter deeply to people all over the world. 

One of the biggest changes this year was the introduction of a new monthly release channel, simply called “Thunderbird Release.” Making this shift required an enormous amount of coordination and care across our desktop and release teams. Unlike the long-standing Extended Support Release (ESR), which provides a single major update every July, the new Thunderbird Release delivers monthly updates. This approach means we can bring you useful improvements and new features significantly faster, while keeping the stability and reliability you rely on.

Over the past year, our desktop team focused heavily on introducing changes that people have been asking for. Specifically, changes that make Thunderbird feel more efficient, intuitive, and modern. We improved visual consistency across system themes, gave you more ways to control the appearance of your message lists and how they’re organized, modernized notifications with native OS integration and quick actions, and moved closer to full Microsoft Exchange support. 

Many of you who switched from the ESR to the new Thunderbird Release channel started seeing these updates as early as April. For those who stuck with the ESR, the annual update, codenamed Eclipse, arrived in July. Thanks to the solid foundation established in those smaller monthly updates, Eclipse enjoyed the smoothest rollout of any annual release in Thunderbird’s history. 

In-depth details on Desktop development can be found in our monthly Developer Digest updates on our blog. 

Thunderbird Mobile

Android

It took longer than we originally anticipated, but Thunderbird has finally arrived as a true smartphone app. The launch of Thunderbird for Android in October 2024 was one of our most exciting steps forward in years. Releasing it took more than two years of active development, beta testing, and invaluable community feedback. 

This milestone was made possible by transforming the much-loved K-9 Mail app into something we could proudly call Thunderbird. That process included a full redesign of the interface to bring it up to modern design standards, as well as building an easy way for people to bring their existing Thunderbird desktop accounts directly into the Android app.

We’ve been encouraged by the enthusiastic response to Thunderbird on Android, but we’re also listening closely to your feedback. Our team, together with community contributors, has one very focused goal: to make Thunderbird the best Android email app available. 

iOS

We’ve also seen the overwhelming demand to build a version of Thunderbird for the iOS community. Unlike the Android app, the iOS app is being built from the ground up. 

Fortunately, Thunderbird for iOS took some major steps forward this year. We published the initial repository (a central location for open-source project files and code) for the Thunderbird mobile team and contributors to work together, and we’re laying the groundwork for public testing. 

Our goal for the first public alpha will be to support manual account setup and basic inbox viewing to meet Apple’s minimum review standards. These early pre-release versions will be distributed through TestFlight, allowing Thunderbird for iOS to benefit from your real-world feedback. 

When we started building Thunderbird for iOS, a core decision was made to use a modern foundation (JMAP) designed for mobile devices. This will allow for, among other advantages, faster mail synchronization and more efficient resource usage. The first pieces of that foundation are already in place, with the basic ability to view folders and messages. We’ve also set up internal tools that will make regular updates, language translations, and community testing possible. 

Thunderbird for iOS is still in the early stages of development, but momentum is strong, our team is growing, and we’re confidently moving toward the first community-accessible release. 

In-depth details on mobile development can be found in our monthly Mobile Progress Report on our blog.

Thundermail and Thunderbird Pro services

It’s no secret we’ve been building additional web services under the Thunderbird Pro name, and 2025 marked a pivotal moment in our vision for a complete, open-source Thunderbird ecosystem. 

This year we announced Thundermail, a dedicated email service by Thunderbird. During the past decade, we’ve seen a large move away from dedicated email clients to products like Gmail, partially because of the robust ecosystem around them. The plan for Thundermail is to eventually offer an alternative webmail solution that protects your privacy, and doesn’t use your messages to train AI or show you ads. 

Here’s what else we’ve been working on in addition to Thundermail: 

During its current beta, Thunderbird Appointment saw great improvements in managing your schedule, with many of the changes focused on reliability and visual polish.

Thunderbird Send, an app for securely sharing encrypted files, also saw forward momentum. Together, these services are steadily moving toward a wider beta launch this fall, and we’re excited to see how you’ll use them to improve your personal and professional lives. 

All of the work going into Thundermail and Thunderbird Pro services is guided by a clear goal: providing you with an ethical alternative to the closed-off “walled gardens” that dominate our digital communication. You shouldn’t have to sacrifice your values and give up your personal data to enjoy convenience and powerful features. 

In-depth details on Thunderbird Pro development can be found in our Thunderbird Pro updates on our blog.

2024 Financial Picture

The generosity of our donors continues to power everything we do, and the importance of these financial contributions cannot be overstated. In 2024, the Thunderbird project once again saw continued growth in donations, which paved the way for Thundermail and the Thunderbird Pro services you just read about. It also gave us the opportunity to grow our mobile development team, improve our user support outreach, and expand our connections to the community.

Here’s a detailed breakdown of our donation revenue in 2024, and why many of these statistics are so meaningful. 

Contribution Revenue

In 2024, financial contributions to Thunderbird reached $10.3 million, representing a 19% increase over the previous year. This support came courtesy of more than 539,000 transactions from more than 335,000 individual donors. A healthy 25% of these contributions were given as recurring monthly support.

What makes this so meaningful to us isn’t the total revenue, or the scale of the donations. It’s how those donations break down. The average contribution was $18.88, with a median of $16.66. Among our recurring donors, the average monthly gift was only $6.25. In fact, 53% of all donations were $20 or less, and 94% were $35 or less. Only 17 contributions were $1,000 or more. 

What does this represent when we go beyond the numbers? It means Thunderbird isn’t sustained by a handful of wealthy benefactors or corporate sponsors. Rather, it is sustained by a global community of people who believe in what we’ve built and what we’re still building, and they come together to keep it moving forward.

And that global reach continues to inspire us. We received contributions from more than 200 countries. The top ten contributing countries – Germany, the United States, France, the United Kingdom, Switzerland, the Netherlands, Japan, Italy, Austria, and Canada – accounted for 83% of our total revenue.

But products aren’t just numbers and code. Products are the people that work on them. To support the ambitions of our expanding roadmap, our team grew significantly in 2024. We added 14 new team members throughout the year, closing out 2024 with 43 full-time staff members. Much of this growth strengthened our mobile development, web services, and desktop + release teams. 80% of our staff focuses on technical work – things like product development and infrastructure – but we also added more roles to actively support users, improve community outreach, and smooth out internal operations. 

Expenses

When we talk about how we use financial contributions, we’re really talking about investments in our shared values. The majority of our spending goes to personnel: the talented individuals who write code, design interfaces, test features, and support our users. Infrastructure is the next largest expense, followed by administrative costs to keep operations running smoothly.

Below is a breakdown of our 2024 expenses:

Community Snapshot

Contributor & Community Growth

For two decades, Thunderbird has survived and thrived because of its dedicated open-source community. In 2024, we continued using our Bitergia dashboard to give our community a clear view of the project’s overall activity across the board. (You can read more about how we collaborated on and use this beneficial tool here.)

This dashboard helps us track participation, identify and celebrate successes, and find areas to improve, which is especially important as we expand the Thunderbird ecosystem with new products and services. 

For this report, we’ve highlighted some of the most notable community metrics and growth milestones from 2024. 

For reference, GitHub and Bugzilla measure developer contributions. TopicBox measures activity across our many mailing lists. Pontoon measures the activity from volunteers who help us translate and localize Thunderbird. SUMO (the Mozilla support website) measures the impact of Thunderbird’s support volunteers who engage with our users and respond to their varied support questions.

We estimate that in 2024, the total number of people who contributed to Thunderbird – by writing code, answering support questions, providing translations, or contributing in other meaningful ways – is more than 20,000.

It’s especially encouraging to see the number of translation locales increase from 58 to 70, as Thunderbird continues to find new users around the world. 

But there are areas of opportunity, too. For example, making it less complicated for people who want to start contributing to Thunderbird. We’ve started addressing this by recording two Community Office Hours videos covering how to write Knowledge Base articles and how to effectively answer questions on the Mozilla Support website.

Mozilla Connect is another portal that lets anyone interested in the betterment of Thunderbird suggest ideas, openly discuss them, and vote on them. In 2024, four desktop ideas as well as four of your ideas in our relatively new mobile space were implemented, and we saw more than 500 new thoughtful ideas suggested across mobile and desktop. Our staff and community are watching for your ideas, so keep them coming! 

Thank you

As we close out this year’s State of the Bird, we want to once again shine a light on the incredible global community of Thunderbird supporters. Whether you’ve contributed your valuable time, financial donations, or simply shared Thunderbird with colleagues, friends, and family, your support continues to brighten Thunderbird’s future. 

After all, products aren’t just numbers on a chart. Products are the people who create them, support them, improve them, and believe in crucial concepts like privacy, digital wellbeing, and open standards. 

We’re so very grateful to you.

The post State Of The Bird 2024/25 appeared first on The Thunderbird Blog.